00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 314 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 2977 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.091 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.092 The recommended git tool is: git 00:00:00.092 using credential 00000000-0000-0000-0000-000000000002 00:00:00.093 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.108 Fetching changes from the remote Git repository 00:00:00.109 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.136 Using shallow fetch with depth 1 00:00:00.136 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.136 > git --version # timeout=10 00:00:00.152 > git --version # 'git version 2.39.2' 00:00:00.152 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.153 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.153 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.934 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.944 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.955 Checking out Revision d55dd09e9e6d4661df5d1073790609767cbcb60c (FETCH_HEAD) 00:00:03.955 > git config core.sparsecheckout # timeout=10 00:00:03.963 > git read-tree -mu HEAD # timeout=10 00:00:03.979 > git checkout -f d55dd09e9e6d4661df5d1073790609767cbcb60c # timeout=5 00:00:04.011 Commit message: "ansible/roles/custom_facts: Add subsystem info to VMDs' nvmes" 00:00:04.011 > git rev-list --no-walk d55dd09e9e6d4661df5d1073790609767cbcb60c # timeout=10 00:00:04.106 [Pipeline] Start of Pipeline 00:00:04.120 [Pipeline] library 00:00:04.122 Loading library shm_lib@master 00:00:04.122 Library shm_lib@master is cached. Copying from home. 00:00:04.141 [Pipeline] node 00:00:04.147 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.149 [Pipeline] { 00:00:04.158 [Pipeline] catchError 00:00:04.160 [Pipeline] { 00:00:04.171 [Pipeline] wrap 00:00:04.181 [Pipeline] { 00:00:04.186 [Pipeline] stage 00:00:04.187 [Pipeline] { (Prologue) 00:00:04.354 [Pipeline] sh 00:00:04.635 + logger -p user.info -t JENKINS-CI 00:00:04.648 [Pipeline] echo 00:00:04.649 Node: GP11 00:00:04.656 [Pipeline] sh 00:00:04.955 [Pipeline] setCustomBuildProperty 00:00:04.964 [Pipeline] echo 00:00:04.964 Cleanup processes 00:00:04.967 [Pipeline] sh 00:00:05.246 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.246 3951721 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.256 [Pipeline] sh 00:00:05.536 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.536 ++ grep -v 'sudo pgrep' 00:00:05.536 ++ awk '{print $1}' 00:00:05.536 + sudo kill -9 00:00:05.536 + true 00:00:05.548 [Pipeline] cleanWs 00:00:05.555 [WS-CLEANUP] Deleting project workspace... 00:00:05.555 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.561 [WS-CLEANUP] done 00:00:05.564 [Pipeline] setCustomBuildProperty 00:00:05.574 [Pipeline] sh 00:00:05.879 + sudo git config --global --replace-all safe.directory '*' 00:00:05.950 [Pipeline] nodesByLabel 00:00:05.951 Found a total of 1 nodes with the 'sorcerer' label 00:00:05.960 [Pipeline] httpRequest 00:00:05.963 HttpMethod: GET 00:00:05.964 URL: http://10.211.164.101/packages/jbp_d55dd09e9e6d4661df5d1073790609767cbcb60c.tar.gz 00:00:05.966 Sending request to url: http://10.211.164.101/packages/jbp_d55dd09e9e6d4661df5d1073790609767cbcb60c.tar.gz 00:00:05.968 Response Code: HTTP/1.1 200 OK 00:00:05.969 Success: Status code 200 is in the accepted range: 200,404 00:00:05.969 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_d55dd09e9e6d4661df5d1073790609767cbcb60c.tar.gz 00:00:07.010 [Pipeline] sh 00:00:07.291 + tar --no-same-owner -xf jbp_d55dd09e9e6d4661df5d1073790609767cbcb60c.tar.gz 00:00:07.313 [Pipeline] httpRequest 00:00:07.318 HttpMethod: GET 00:00:07.319 URL: http://10.211.164.101/packages/spdk_9c9f7ddbbe5483ec0b43cb9e4b82cabcec1f320a.tar.gz 00:00:07.319 Sending request to url: http://10.211.164.101/packages/spdk_9c9f7ddbbe5483ec0b43cb9e4b82cabcec1f320a.tar.gz 00:00:07.345 Response Code: HTTP/1.1 200 OK 00:00:07.346 Success: Status code 200 is in the accepted range: 200,404 00:00:07.346 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_9c9f7ddbbe5483ec0b43cb9e4b82cabcec1f320a.tar.gz 00:01:29.253 [Pipeline] sh 00:01:29.537 + tar --no-same-owner -xf spdk_9c9f7ddbbe5483ec0b43cb9e4b82cabcec1f320a.tar.gz 00:01:32.088 [Pipeline] sh 00:01:32.373 + git -C spdk log --oneline -n5 00:01:32.373 9c9f7ddbb lib/event: make SPDK app exit upon RPC server launch failure 00:01:32.373 13481a596 test: ensure unique address for FIO plugin 00:01:32.373 a5fcb0302 spdk_trace: refactor output format handling 00:01:32.373 1b4773b8f dpdk/crypto: increase RTE_CRYPTO_MAX_DEVS to fit QAT SYM and ASYM VFs 00:01:32.373 bf8dcb56e rpc: add validation for timeout value 00:01:32.394 [Pipeline] withCredentials 00:01:32.406 > git --version # timeout=10 00:01:32.420 > git --version # 'git version 2.39.2' 00:01:32.439 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:32.441 [Pipeline] { 00:01:32.451 [Pipeline] retry 00:01:32.453 [Pipeline] { 00:01:32.470 [Pipeline] sh 00:01:32.751 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:34.142 [Pipeline] } 00:01:34.165 [Pipeline] // retry 00:01:34.172 [Pipeline] } 00:01:34.195 [Pipeline] // withCredentials 00:01:34.211 [Pipeline] httpRequest 00:01:34.215 HttpMethod: GET 00:01:34.216 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:34.216 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:34.219 Response Code: HTTP/1.1 200 OK 00:01:34.219 Success: Status code 200 is in the accepted range: 200,404 00:01:34.220 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:42.214 [Pipeline] sh 00:01:42.502 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:43.900 [Pipeline] sh 00:01:44.186 + git -C dpdk log --oneline -n5 00:01:44.186 eeb0605f11 version: 23.11.0 00:01:44.186 238778122a doc: update release notes for 23.11 00:01:44.186 46aa6b3cfc doc: fix description of RSS features 00:01:44.187 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:44.187 7e421ae345 devtools: 
support skipping forbid rule check 00:01:44.198 [Pipeline] } 00:01:44.212 [Pipeline] // stage 00:01:44.220 [Pipeline] stage 00:01:44.221 [Pipeline] { (Prepare) 00:01:44.241 [Pipeline] writeFile 00:01:44.257 [Pipeline] sh 00:01:44.543 + logger -p user.info -t JENKINS-CI 00:01:44.556 [Pipeline] sh 00:01:44.864 + logger -p user.info -t JENKINS-CI 00:01:44.877 [Pipeline] sh 00:01:45.164 + cat autorun-spdk.conf 00:01:45.164 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:45.164 SPDK_TEST_NVMF=1 00:01:45.164 SPDK_TEST_NVME_CLI=1 00:01:45.164 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:45.164 SPDK_TEST_NVMF_NICS=e810 00:01:45.164 SPDK_TEST_VFIOUSER=1 00:01:45.164 SPDK_RUN_UBSAN=1 00:01:45.164 NET_TYPE=phy 00:01:45.164 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:45.165 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:45.173 RUN_NIGHTLY=1 00:01:45.177 [Pipeline] readFile 00:01:45.200 [Pipeline] withEnv 00:01:45.202 [Pipeline] { 00:01:45.216 [Pipeline] sh 00:01:45.506 + set -ex 00:01:45.506 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:45.506 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:45.506 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:45.506 ++ SPDK_TEST_NVMF=1 00:01:45.506 ++ SPDK_TEST_NVME_CLI=1 00:01:45.506 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:45.506 ++ SPDK_TEST_NVMF_NICS=e810 00:01:45.506 ++ SPDK_TEST_VFIOUSER=1 00:01:45.507 ++ SPDK_RUN_UBSAN=1 00:01:45.507 ++ NET_TYPE=phy 00:01:45.507 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:45.507 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:45.507 ++ RUN_NIGHTLY=1 00:01:45.507 + case $SPDK_TEST_NVMF_NICS in 00:01:45.507 + DRIVERS=ice 00:01:45.507 + [[ tcp == \r\d\m\a ]] 00:01:45.507 + [[ -n ice ]] 00:01:45.507 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:45.507 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:45.507 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:45.507 rmmod: ERROR: Module irdma is not currently loaded 00:01:45.507 rmmod: ERROR: Module i40iw is not currently loaded 00:01:45.507 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:45.507 + true 00:01:45.507 + for D in $DRIVERS 00:01:45.507 + sudo modprobe ice 00:01:45.507 + exit 0 00:01:45.517 [Pipeline] } 00:01:45.534 [Pipeline] // withEnv 00:01:45.539 [Pipeline] } 00:01:45.555 [Pipeline] // stage 00:01:45.564 [Pipeline] catchError 00:01:45.565 [Pipeline] { 00:01:45.579 [Pipeline] timeout 00:01:45.579 Timeout set to expire in 40 min 00:01:45.581 [Pipeline] { 00:01:45.595 [Pipeline] stage 00:01:45.597 [Pipeline] { (Tests) 00:01:45.611 [Pipeline] sh 00:01:45.899 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:45.899 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:45.899 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:45.899 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:45.899 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:45.899 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:45.899 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:45.899 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:45.899 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:45.899 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:45.899 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:45.899 + source /etc/os-release 00:01:45.899 ++ NAME='Fedora Linux' 00:01:45.899 ++ VERSION='38 (Cloud Edition)' 00:01:45.899 ++ ID=fedora 00:01:45.899 ++ VERSION_ID=38 00:01:45.899 ++ VERSION_CODENAME= 00:01:45.899 ++ PLATFORM_ID=platform:f38 00:01:45.899 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:45.899 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:45.899 ++ LOGO=fedora-logo-icon 00:01:45.899 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:45.899 ++ HOME_URL=https://fedoraproject.org/ 00:01:45.899 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:45.899 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:45.899 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:45.899 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:45.899 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:45.899 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:45.899 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:45.899 ++ SUPPORT_END=2024-05-14 00:01:45.899 ++ VARIANT='Cloud Edition' 00:01:45.899 ++ VARIANT_ID=cloud 00:01:45.899 + uname -a 00:01:45.899 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:45.899 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:46.837 Hugepages 00:01:46.837 node hugesize free / total 00:01:46.837 node0 1048576kB 0 / 0 00:01:46.837 node0 2048kB 0 / 0 00:01:46.837 node1 1048576kB 0 / 0 00:01:46.838 node1 2048kB 0 / 0 00:01:46.838 00:01:46.838 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:46.838 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:46.838 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:46.838 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:46.838 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:46.838 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:46.838 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:46.838 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:46.838 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:46.838 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:46.838 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:46.838 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:46.838 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:46.838 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:46.838 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:46.838 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:46.838 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:47.097 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:47.097 + rm -f /tmp/spdk-ld-path 00:01:47.097 + source autorun-spdk.conf 00:01:47.097 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:47.097 ++ SPDK_TEST_NVMF=1 00:01:47.097 ++ SPDK_TEST_NVME_CLI=1 00:01:47.097 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:47.097 ++ SPDK_TEST_NVMF_NICS=e810 00:01:47.097 ++ SPDK_TEST_VFIOUSER=1 00:01:47.097 ++ SPDK_RUN_UBSAN=1 00:01:47.097 ++ NET_TYPE=phy 00:01:47.097 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:47.097 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:47.097 ++ RUN_NIGHTLY=1 00:01:47.097 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:47.097 + [[ -n '' ]] 00:01:47.097 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
00:01:47.097 + for M in /var/spdk/build-*-manifest.txt 00:01:47.097 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:47.097 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:47.097 + for M in /var/spdk/build-*-manifest.txt 00:01:47.097 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:47.097 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:47.097 ++ uname 00:01:47.097 + [[ Linux == \L\i\n\u\x ]] 00:01:47.097 + sudo dmesg -T 00:01:47.097 + sudo dmesg --clear 00:01:47.097 + dmesg_pid=3953039 00:01:47.097 + [[ Fedora Linux == FreeBSD ]] 00:01:47.097 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:47.097 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:47.097 + sudo dmesg -Tw 00:01:47.097 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:47.097 + [[ -x /usr/src/fio-static/fio ]] 00:01:47.097 + export FIO_BIN=/usr/src/fio-static/fio 00:01:47.097 + FIO_BIN=/usr/src/fio-static/fio 00:01:47.097 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:47.097 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:47.097 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:47.097 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:47.097 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:47.097 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:47.097 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:47.097 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:47.097 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:47.097 Test configuration: 00:01:47.097 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:47.097 SPDK_TEST_NVMF=1 00:01:47.097 SPDK_TEST_NVME_CLI=1 00:01:47.097 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:47.097 SPDK_TEST_NVMF_NICS=e810 00:01:47.097 SPDK_TEST_VFIOUSER=1 00:01:47.097 SPDK_RUN_UBSAN=1 00:01:47.097 NET_TYPE=phy 00:01:47.097 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:47.097 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:47.097 RUN_NIGHTLY=1 06:27:51 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:47.097 06:27:51 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:47.097 06:27:51 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:47.097 06:27:51 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:47.097 06:27:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.097 06:27:51 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.097 06:27:51 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.097 06:27:51 -- paths/export.sh@5 -- $ export PATH 00:01:47.097 06:27:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.097 06:27:51 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:47.097 06:27:51 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:47.097 06:27:51 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713328071.XXXXXX 00:01:47.097 06:27:51 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713328071.29NLi7 00:01:47.097 06:27:51 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:47.097 06:27:51 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']' 00:01:47.097 06:27:51 -- common/autobuild_common.sh@442 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:47.097 06:27:51 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:47.097 06:27:51 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:47.097 06:27:51 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:47.097 06:27:51 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:47.097 06:27:51 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:01:47.097 06:27:51 -- common/autotest_common.sh@10 -- $ set +x 00:01:47.097 06:27:51 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:47.097 06:27:51 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:01:47.097 06:27:51 -- pm/common@17 -- $ local monitor 00:01:47.097 06:27:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.097 06:27:51 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3953075 00:01:47.097 06:27:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.097 06:27:51 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3953077 00:01:47.097 06:27:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.097 06:27:51 -- pm/common@21 -- $ date +%s 00:01:47.097 06:27:51 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3953079 00:01:47.097 06:27:51 -- pm/common@21 -- $ date +%s 00:01:47.097 06:27:51 -- pm/common@19 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:01:47.097 06:27:51 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3953082 00:01:47.097 06:27:51 -- pm/common@21 -- $ date +%s 00:01:47.097 06:27:51 -- pm/common@26 -- $ sleep 1 00:01:47.097 06:27:51 -- pm/common@21 -- $ date +%s 00:01:47.097 06:27:51 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713328071 00:01:47.097 06:27:51 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713328071 00:01:47.097 06:27:51 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713328071 00:01:47.097 06:27:51 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713328071 00:01:47.097 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713328071_collect-vmstat.pm.log 00:01:47.097 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713328071_collect-bmc-pm.bmc.pm.log 00:01:47.097 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713328071_collect-cpu-temp.pm.log 00:01:47.097 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713328071_collect-cpu-load.pm.log 00:01:48.036 06:27:52 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:01:48.036 06:27:52 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:48.036 06:27:52 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:48.036 06:27:52 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:48.036 06:27:52 -- spdk/autobuild.sh@16 -- $ date -u 00:01:48.036 Wed Apr 17 04:27:52 AM UTC 2024 00:01:48.036 06:27:52 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:48.295 v24.05-pre-402-g9c9f7ddbb 00:01:48.295 06:27:52 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:48.295 06:27:52 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:48.295 06:27:52 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:48.295 06:27:52 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:48.295 06:27:52 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:48.295 06:27:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:48.295 ************************************ 00:01:48.295 START TEST ubsan 00:01:48.295 ************************************ 00:01:48.295 06:27:52 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:01:48.295 using ubsan 00:01:48.295 00:01:48.295 real 0m0.000s 00:01:48.295 user 0m0.000s 00:01:48.295 sys 0m0.000s 00:01:48.295 06:27:52 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:01:48.295 06:27:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:48.295 ************************************ 00:01:48.295 END TEST ubsan 00:01:48.295 ************************************ 00:01:48.295 06:27:52 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:48.295 06:27:52 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 
00:01:48.295 06:27:52 -- common/autobuild_common.sh@427 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:48.295 06:27:52 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:01:48.295 06:27:52 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:48.295 06:27:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:48.295 ************************************ 00:01:48.295 START TEST build_native_dpdk 00:01:48.295 ************************************ 00:01:48.295 06:27:52 -- common/autotest_common.sh@1111 -- $ _build_native_dpdk 00:01:48.295 06:27:52 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:48.295 06:27:52 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:48.295 06:27:52 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:48.295 06:27:52 -- common/autobuild_common.sh@51 -- $ local compiler 00:01:48.295 06:27:52 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:48.295 06:27:52 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:48.295 06:27:52 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:48.295 06:27:52 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:48.295 06:27:52 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:48.295 06:27:52 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:48.295 06:27:52 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:48.295 06:27:52 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:48.295 06:27:52 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:48.295 06:27:52 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:48.295 06:27:52 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:48.295 06:27:52 -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:48.295 06:27:52 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:48.295 06:27:52 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:48.295 06:27:52 -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:48.295 06:27:52 -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:48.295 eeb0605f11 version: 23.11.0 00:01:48.295 238778122a doc: update release notes for 23.11 00:01:48.295 46aa6b3cfc doc: fix description of RSS features 00:01:48.295 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:48.295 7e421ae345 devtools: support skipping forbid rule check 00:01:48.295 06:27:52 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:48.295 06:27:52 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:48.295 06:27:52 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:48.295 06:27:52 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:48.295 06:27:52 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:48.295 06:27:52 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:48.295 06:27:52 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:48.295 06:27:52 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:48.295 06:27:52 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:48.295 06:27:52 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:48.295 06:27:52 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:48.295 06:27:52 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:48.295 06:27:52 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:48.295 06:27:52 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:48.295 06:27:52 -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:48.295 06:27:52 -- common/autobuild_common.sh@168 -- $ uname -s 00:01:48.295 06:27:52 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:48.295 06:27:52 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:48.295 06:27:52 -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:48.296 06:27:52 -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:48.296 06:27:52 -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:48.296 06:27:52 -- scripts/common.sh@333 -- $ IFS=.-: 00:01:48.296 06:27:52 -- scripts/common.sh@333 -- $ read -ra ver1 00:01:48.296 06:27:52 -- scripts/common.sh@334 -- $ IFS=.-: 00:01:48.296 06:27:52 -- scripts/common.sh@334 -- $ read -ra ver2 00:01:48.296 06:27:52 -- scripts/common.sh@335 -- $ local 'op=<' 00:01:48.296 06:27:52 -- scripts/common.sh@337 -- $ ver1_l=3 00:01:48.296 06:27:52 -- scripts/common.sh@338 -- $ ver2_l=3 00:01:48.296 06:27:52 -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:48.296 06:27:52 -- scripts/common.sh@341 -- $ case "$op" in 00:01:48.296 06:27:52 -- scripts/common.sh@342 -- $ : 1 00:01:48.296 06:27:52 -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:48.296 06:27:52 -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:48.296 06:27:52 -- scripts/common.sh@362 -- $ decimal 23 00:01:48.296 06:27:52 -- scripts/common.sh@350 -- $ local d=23 00:01:48.296 06:27:52 -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:48.296 06:27:52 -- scripts/common.sh@352 -- $ echo 23 00:01:48.296 06:27:52 -- scripts/common.sh@362 -- $ ver1[v]=23 00:01:48.296 06:27:52 -- scripts/common.sh@363 -- $ decimal 21 00:01:48.296 06:27:52 -- scripts/common.sh@350 -- $ local d=21 00:01:48.296 06:27:52 -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:48.296 06:27:52 -- scripts/common.sh@352 -- $ echo 21 00:01:48.296 06:27:52 -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:48.296 06:27:52 -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:48.296 06:27:52 -- scripts/common.sh@364 -- $ return 1 00:01:48.296 06:27:52 -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:48.296 patching file config/rte_config.h 00:01:48.296 Hunk #1 succeeded at 60 (offset 1 line). 00:01:48.296 06:27:52 -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:48.296 06:27:52 -- common/autobuild_common.sh@178 -- $ uname -s 00:01:48.296 06:27:52 -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:48.296 06:27:52 -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:48.296 06:27:52 -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:52.502 The Meson build system 00:01:52.502 Version: 1.3.1 00:01:52.502 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:52.502 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:52.502 Build type: native build 00:01:52.502 Program cat found: YES (/usr/bin/cat) 00:01:52.502 Project name: DPDK 00:01:52.502 Project version: 23.11.0 00:01:52.502 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:52.502 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:52.502 Host machine cpu family: x86_64 00:01:52.502 Host machine cpu: x86_64 00:01:52.502 Message: ## Building in Developer Mode ## 00:01:52.502 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:52.502 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:52.503 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:52.503 Program python3 found: YES (/usr/bin/python3) 00:01:52.503 Program cat found: YES (/usr/bin/cat) 00:01:52.503 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:52.503 Compiler for C supports arguments -march=native: YES 00:01:52.503 Checking for size of "void *" : 8 00:01:52.503 Checking for size of "void *" : 8 (cached) 00:01:52.503 Library m found: YES 00:01:52.503 Library numa found: YES 00:01:52.503 Has header "numaif.h" : YES 00:01:52.503 Library fdt found: NO 00:01:52.503 Library execinfo found: NO 00:01:52.503 Has header "execinfo.h" : YES 00:01:52.503 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:52.503 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:52.503 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:52.503 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:52.503 Run-time dependency openssl found: YES 3.0.9 00:01:52.503 Run-time dependency libpcap found: YES 1.10.4 00:01:52.503 Has header "pcap.h" with dependency libpcap: YES 00:01:52.503 Compiler for C supports arguments -Wcast-qual: YES 00:01:52.503 Compiler for C supports arguments -Wdeprecated: YES 00:01:52.503 Compiler for C supports arguments -Wformat: YES 00:01:52.503 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:52.503 Compiler for C supports arguments -Wformat-security: NO 00:01:52.503 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:52.503 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:52.503 Compiler for C supports arguments -Wnested-externs: YES 00:01:52.503 Compiler for C supports arguments -Wold-style-definition: YES 00:01:52.503 Compiler for C supports arguments -Wpointer-arith: YES 00:01:52.503 Compiler for C supports arguments -Wsign-compare: YES 00:01:52.503 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:52.503 Compiler for C supports arguments -Wundef: YES 00:01:52.503 Compiler for C supports arguments -Wwrite-strings: YES 00:01:52.503 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:52.503 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:52.503 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:52.503 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:52.503 Program objdump found: YES (/usr/bin/objdump) 00:01:52.503 Compiler for C supports arguments -mavx512f: YES 00:01:52.503 Checking if "AVX512 checking" compiles: YES 00:01:52.503 Fetching value of define "__SSE4_2__" : 1 00:01:52.503 Fetching value of define "__AES__" : 1 00:01:52.503 Fetching value of define "__AVX__" : 1 00:01:52.503 Fetching value of define "__AVX2__" : (undefined) 00:01:52.503 Fetching value of define "__AVX512BW__" : (undefined) 00:01:52.503 Fetching value of define "__AVX512CD__" : (undefined) 00:01:52.503 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:52.503 Fetching value of define "__AVX512F__" : (undefined) 00:01:52.503 Fetching value of define "__AVX512VL__" : (undefined) 00:01:52.503 Fetching value of define "__PCLMUL__" : 1 00:01:52.503 Fetching value of define "__RDRND__" : 1 00:01:52.503 Fetching value of define "__RDSEED__" : (undefined) 00:01:52.503 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:52.503 Fetching value of define "__znver1__" : (undefined) 00:01:52.503 Fetching value of define "__znver2__" : (undefined) 00:01:52.503 Fetching value of define "__znver3__" : (undefined) 00:01:52.503 Fetching value of define "__znver4__" : (undefined) 00:01:52.503 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:52.503 Message: lib/log: Defining dependency "log" 00:01:52.503 Message: lib/kvargs: Defining dependency 
"kvargs" 00:01:52.503 Message: lib/telemetry: Defining dependency "telemetry" 00:01:52.503 Checking for function "getentropy" : NO 00:01:52.503 Message: lib/eal: Defining dependency "eal" 00:01:52.503 Message: lib/ring: Defining dependency "ring" 00:01:52.503 Message: lib/rcu: Defining dependency "rcu" 00:01:52.503 Message: lib/mempool: Defining dependency "mempool" 00:01:52.503 Message: lib/mbuf: Defining dependency "mbuf" 00:01:52.503 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:52.503 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:52.503 Compiler for C supports arguments -mpclmul: YES 00:01:52.503 Compiler for C supports arguments -maes: YES 00:01:52.503 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:52.503 Compiler for C supports arguments -mavx512bw: YES 00:01:52.503 Compiler for C supports arguments -mavx512dq: YES 00:01:52.503 Compiler for C supports arguments -mavx512vl: YES 00:01:52.503 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:52.503 Compiler for C supports arguments -mavx2: YES 00:01:52.503 Compiler for C supports arguments -mavx: YES 00:01:52.503 Message: lib/net: Defining dependency "net" 00:01:52.503 Message: lib/meter: Defining dependency "meter" 00:01:52.503 Message: lib/ethdev: Defining dependency "ethdev" 00:01:52.503 Message: lib/pci: Defining dependency "pci" 00:01:52.503 Message: lib/cmdline: Defining dependency "cmdline" 00:01:52.503 Message: lib/metrics: Defining dependency "metrics" 00:01:52.503 Message: lib/hash: Defining dependency "hash" 00:01:52.503 Message: lib/timer: Defining dependency "timer" 00:01:52.503 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:52.503 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:52.503 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:52.503 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:52.503 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:52.503 Message: lib/acl: Defining dependency "acl" 00:01:52.503 Message: lib/bbdev: Defining dependency "bbdev" 00:01:52.503 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:52.503 Run-time dependency libelf found: YES 0.190 00:01:52.503 Message: lib/bpf: Defining dependency "bpf" 00:01:52.503 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:52.503 Message: lib/compressdev: Defining dependency "compressdev" 00:01:52.503 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:52.503 Message: lib/distributor: Defining dependency "distributor" 00:01:52.503 Message: lib/dmadev: Defining dependency "dmadev" 00:01:52.503 Message: lib/efd: Defining dependency "efd" 00:01:52.503 Message: lib/eventdev: Defining dependency "eventdev" 00:01:52.503 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:52.503 Message: lib/gpudev: Defining dependency "gpudev" 00:01:52.503 Message: lib/gro: Defining dependency "gro" 00:01:52.503 Message: lib/gso: Defining dependency "gso" 00:01:52.503 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:52.503 Message: lib/jobstats: Defining dependency "jobstats" 00:01:52.503 Message: lib/latencystats: Defining dependency "latencystats" 00:01:52.503 Message: lib/lpm: Defining dependency "lpm" 00:01:52.503 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:52.503 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:52.503 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:52.503 Compiler for C 
supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:52.503 Message: lib/member: Defining dependency "member" 00:01:52.503 Message: lib/pcapng: Defining dependency "pcapng" 00:01:52.503 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:52.503 Message: lib/power: Defining dependency "power" 00:01:52.503 Message: lib/rawdev: Defining dependency "rawdev" 00:01:52.503 Message: lib/regexdev: Defining dependency "regexdev" 00:01:52.503 Message: lib/mldev: Defining dependency "mldev" 00:01:52.503 Message: lib/rib: Defining dependency "rib" 00:01:52.503 Message: lib/reorder: Defining dependency "reorder" 00:01:52.503 Message: lib/sched: Defining dependency "sched" 00:01:52.503 Message: lib/security: Defining dependency "security" 00:01:52.503 Message: lib/stack: Defining dependency "stack" 00:01:52.503 Has header "linux/userfaultfd.h" : YES 00:01:52.503 Has header "linux/vduse.h" : YES 00:01:52.503 Message: lib/vhost: Defining dependency "vhost" 00:01:52.503 Message: lib/ipsec: Defining dependency "ipsec" 00:01:52.503 Message: lib/pdcp: Defining dependency "pdcp" 00:01:52.503 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:52.503 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:52.503 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:52.503 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:52.503 Message: lib/fib: Defining dependency "fib" 00:01:52.503 Message: lib/port: Defining dependency "port" 00:01:52.503 Message: lib/pdump: Defining dependency "pdump" 00:01:52.503 Message: lib/table: Defining dependency "table" 00:01:52.503 Message: lib/pipeline: Defining dependency "pipeline" 00:01:52.503 Message: lib/graph: Defining dependency "graph" 00:01:52.503 Message: lib/node: Defining dependency "node" 00:01:53.890 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:53.890 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:53.890 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:53.890 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:53.890 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:53.890 Compiler for C supports arguments -Wno-unused-value: YES 00:01:53.890 Compiler for C supports arguments -Wno-format: YES 00:01:53.890 Compiler for C supports arguments -Wno-format-security: YES 00:01:53.890 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:53.890 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:53.890 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:53.890 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:53.890 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:53.890 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:53.890 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:53.890 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:53.890 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:53.890 Has header "sys/epoll.h" : YES 00:01:53.890 Program doxygen found: YES (/usr/bin/doxygen) 00:01:53.890 Configuring doxy-api-html.conf using configuration 00:01:53.890 Configuring doxy-api-man.conf using configuration 00:01:53.890 Program mandb found: YES (/usr/bin/mandb) 00:01:53.890 Program sphinx-build found: NO 00:01:53.890 Configuring rte_build_config.h using configuration 00:01:53.890 Message: 00:01:53.890 ================= 00:01:53.890 Applications Enabled 00:01:53.890 
================= 00:01:53.890 00:01:53.890 apps: 00:01:53.890 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:53.890 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:53.890 test-pmd, test-regex, test-sad, test-security-perf, 00:01:53.890 00:01:53.890 Message: 00:01:53.890 ================= 00:01:53.890 Libraries Enabled 00:01:53.890 ================= 00:01:53.890 00:01:53.890 libs: 00:01:53.890 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:53.890 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:53.890 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:53.890 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:53.890 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:53.890 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:53.890 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:53.890 00:01:53.890 00:01:53.890 Message: 00:01:53.890 =============== 00:01:53.890 Drivers Enabled 00:01:53.890 =============== 00:01:53.890 00:01:53.890 common: 00:01:53.890 00:01:53.890 bus: 00:01:53.890 pci, vdev, 00:01:53.890 mempool: 00:01:53.890 ring, 00:01:53.890 dma: 00:01:53.890 00:01:53.890 net: 00:01:53.890 i40e, 00:01:53.890 raw: 00:01:53.890 00:01:53.890 crypto: 00:01:53.890 00:01:53.890 compress: 00:01:53.890 00:01:53.890 regex: 00:01:53.890 00:01:53.890 ml: 00:01:53.890 00:01:53.890 vdpa: 00:01:53.890 00:01:53.890 event: 00:01:53.890 00:01:53.890 baseband: 00:01:53.890 00:01:53.890 gpu: 00:01:53.890 00:01:53.890 00:01:53.890 Message: 00:01:53.890 ================= 00:01:53.890 Content Skipped 00:01:53.890 ================= 00:01:53.890 00:01:53.890 apps: 00:01:53.890 00:01:53.890 libs: 00:01:53.890 00:01:53.890 drivers: 00:01:53.890 common/cpt: not in enabled drivers build config 00:01:53.890 common/dpaax: not in enabled drivers build config 00:01:53.890 common/iavf: not in enabled drivers build config 00:01:53.890 common/idpf: not in enabled drivers build config 00:01:53.890 common/mvep: not in enabled drivers build config 00:01:53.890 common/octeontx: not in enabled drivers build config 00:01:53.890 bus/auxiliary: not in enabled drivers build config 00:01:53.890 bus/cdx: not in enabled drivers build config 00:01:53.890 bus/dpaa: not in enabled drivers build config 00:01:53.890 bus/fslmc: not in enabled drivers build config 00:01:53.890 bus/ifpga: not in enabled drivers build config 00:01:53.890 bus/platform: not in enabled drivers build config 00:01:53.890 bus/vmbus: not in enabled drivers build config 00:01:53.890 common/cnxk: not in enabled drivers build config 00:01:53.890 common/mlx5: not in enabled drivers build config 00:01:53.890 common/nfp: not in enabled drivers build config 00:01:53.890 common/qat: not in enabled drivers build config 00:01:53.890 common/sfc_efx: not in enabled drivers build config 00:01:53.890 mempool/bucket: not in enabled drivers build config 00:01:53.890 mempool/cnxk: not in enabled drivers build config 00:01:53.890 mempool/dpaa: not in enabled drivers build config 00:01:53.890 mempool/dpaa2: not in enabled drivers build config 00:01:53.890 mempool/octeontx: not in enabled drivers build config 00:01:53.890 mempool/stack: not in enabled drivers build config 00:01:53.890 dma/cnxk: not in enabled drivers build config 00:01:53.890 dma/dpaa: not in enabled drivers build config 00:01:53.890 dma/dpaa2: not in enabled drivers build 
config 00:01:53.890 dma/hisilicon: not in enabled drivers build config 00:01:53.890 dma/idxd: not in enabled drivers build config 00:01:53.890 dma/ioat: not in enabled drivers build config 00:01:53.890 dma/skeleton: not in enabled drivers build config 00:01:53.890 net/af_packet: not in enabled drivers build config 00:01:53.890 net/af_xdp: not in enabled drivers build config 00:01:53.890 net/ark: not in enabled drivers build config 00:01:53.890 net/atlantic: not in enabled drivers build config 00:01:53.890 net/avp: not in enabled drivers build config 00:01:53.890 net/axgbe: not in enabled drivers build config 00:01:53.890 net/bnx2x: not in enabled drivers build config 00:01:53.890 net/bnxt: not in enabled drivers build config 00:01:53.890 net/bonding: not in enabled drivers build config 00:01:53.890 net/cnxk: not in enabled drivers build config 00:01:53.890 net/cpfl: not in enabled drivers build config 00:01:53.890 net/cxgbe: not in enabled drivers build config 00:01:53.890 net/dpaa: not in enabled drivers build config 00:01:53.890 net/dpaa2: not in enabled drivers build config 00:01:53.890 net/e1000: not in enabled drivers build config 00:01:53.890 net/ena: not in enabled drivers build config 00:01:53.890 net/enetc: not in enabled drivers build config 00:01:53.890 net/enetfec: not in enabled drivers build config 00:01:53.890 net/enic: not in enabled drivers build config 00:01:53.890 net/failsafe: not in enabled drivers build config 00:01:53.890 net/fm10k: not in enabled drivers build config 00:01:53.890 net/gve: not in enabled drivers build config 00:01:53.890 net/hinic: not in enabled drivers build config 00:01:53.890 net/hns3: not in enabled drivers build config 00:01:53.890 net/iavf: not in enabled drivers build config 00:01:53.890 net/ice: not in enabled drivers build config 00:01:53.890 net/idpf: not in enabled drivers build config 00:01:53.890 net/igc: not in enabled drivers build config 00:01:53.890 net/ionic: not in enabled drivers build config 00:01:53.890 net/ipn3ke: not in enabled drivers build config 00:01:53.890 net/ixgbe: not in enabled drivers build config 00:01:53.890 net/mana: not in enabled drivers build config 00:01:53.890 net/memif: not in enabled drivers build config 00:01:53.890 net/mlx4: not in enabled drivers build config 00:01:53.890 net/mlx5: not in enabled drivers build config 00:01:53.890 net/mvneta: not in enabled drivers build config 00:01:53.890 net/mvpp2: not in enabled drivers build config 00:01:53.890 net/netvsc: not in enabled drivers build config 00:01:53.890 net/nfb: not in enabled drivers build config 00:01:53.890 net/nfp: not in enabled drivers build config 00:01:53.890 net/ngbe: not in enabled drivers build config 00:01:53.890 net/null: not in enabled drivers build config 00:01:53.890 net/octeontx: not in enabled drivers build config 00:01:53.890 net/octeon_ep: not in enabled drivers build config 00:01:53.890 net/pcap: not in enabled drivers build config 00:01:53.890 net/pfe: not in enabled drivers build config 00:01:53.890 net/qede: not in enabled drivers build config 00:01:53.890 net/ring: not in enabled drivers build config 00:01:53.890 net/sfc: not in enabled drivers build config 00:01:53.890 net/softnic: not in enabled drivers build config 00:01:53.890 net/tap: not in enabled drivers build config 00:01:53.890 net/thunderx: not in enabled drivers build config 00:01:53.890 net/txgbe: not in enabled drivers build config 00:01:53.890 net/vdev_netvsc: not in enabled drivers build config 00:01:53.890 net/vhost: not in enabled drivers build config 
00:01:53.890 net/virtio: not in enabled drivers build config 00:01:53.890 net/vmxnet3: not in enabled drivers build config 00:01:53.890 raw/cnxk_bphy: not in enabled drivers build config 00:01:53.890 raw/cnxk_gpio: not in enabled drivers build config 00:01:53.890 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:53.890 raw/ifpga: not in enabled drivers build config 00:01:53.890 raw/ntb: not in enabled drivers build config 00:01:53.891 raw/skeleton: not in enabled drivers build config 00:01:53.891 crypto/armv8: not in enabled drivers build config 00:01:53.891 crypto/bcmfs: not in enabled drivers build config 00:01:53.891 crypto/caam_jr: not in enabled drivers build config 00:01:53.891 crypto/ccp: not in enabled drivers build config 00:01:53.891 crypto/cnxk: not in enabled drivers build config 00:01:53.891 crypto/dpaa_sec: not in enabled drivers build config 00:01:53.891 crypto/dpaa2_sec: not in enabled drivers build config 00:01:53.891 crypto/ipsec_mb: not in enabled drivers build config 00:01:53.891 crypto/mlx5: not in enabled drivers build config 00:01:53.891 crypto/mvsam: not in enabled drivers build config 00:01:53.891 crypto/nitrox: not in enabled drivers build config 00:01:53.891 crypto/null: not in enabled drivers build config 00:01:53.891 crypto/octeontx: not in enabled drivers build config 00:01:53.891 crypto/openssl: not in enabled drivers build config 00:01:53.891 crypto/scheduler: not in enabled drivers build config 00:01:53.891 crypto/uadk: not in enabled drivers build config 00:01:53.891 crypto/virtio: not in enabled drivers build config 00:01:53.891 compress/isal: not in enabled drivers build config 00:01:53.891 compress/mlx5: not in enabled drivers build config 00:01:53.891 compress/octeontx: not in enabled drivers build config 00:01:53.891 compress/zlib: not in enabled drivers build config 00:01:53.891 regex/mlx5: not in enabled drivers build config 00:01:53.891 regex/cn9k: not in enabled drivers build config 00:01:53.891 ml/cnxk: not in enabled drivers build config 00:01:53.891 vdpa/ifc: not in enabled drivers build config 00:01:53.891 vdpa/mlx5: not in enabled drivers build config 00:01:53.891 vdpa/nfp: not in enabled drivers build config 00:01:53.891 vdpa/sfc: not in enabled drivers build config 00:01:53.891 event/cnxk: not in enabled drivers build config 00:01:53.891 event/dlb2: not in enabled drivers build config 00:01:53.891 event/dpaa: not in enabled drivers build config 00:01:53.891 event/dpaa2: not in enabled drivers build config 00:01:53.891 event/dsw: not in enabled drivers build config 00:01:53.891 event/opdl: not in enabled drivers build config 00:01:53.891 event/skeleton: not in enabled drivers build config 00:01:53.891 event/sw: not in enabled drivers build config 00:01:53.891 event/octeontx: not in enabled drivers build config 00:01:53.891 baseband/acc: not in enabled drivers build config 00:01:53.891 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:53.891 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:53.891 baseband/la12xx: not in enabled drivers build config 00:01:53.891 baseband/null: not in enabled drivers build config 00:01:53.891 baseband/turbo_sw: not in enabled drivers build config 00:01:53.891 gpu/cuda: not in enabled drivers build config 00:01:53.891 00:01:53.891 00:01:53.891 Build targets in project: 220 00:01:53.891 00:01:53.891 DPDK 23.11.0 00:01:53.891 00:01:53.891 User defined options 00:01:53.891 libdir : lib 00:01:53.891 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:53.891 
c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:53.891 c_link_args : 00:01:53.891 enable_docs : false 00:01:53.891 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:53.891 enable_kmods : false 00:01:53.891 machine : native 00:01:53.891 tests : false 00:01:53.891 00:01:53.891 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:53.891 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:53.891 06:27:58 -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:53.891 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:53.891 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:53.891 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:53.891 [3/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:53.891 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:53.891 [5/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:53.891 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:53.891 [7/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:54.152 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:54.152 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:54.152 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:54.152 [11/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:54.152 [12/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:54.152 [13/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:54.152 [14/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:54.152 [15/710] Linking static target lib/librte_kvargs.a 00:01:54.153 [16/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:54.153 [17/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:54.153 [18/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:54.153 [19/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:54.153 [20/710] Linking static target lib/librte_log.a 00:01:54.419 [21/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:54.419 [22/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.986 [23/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.986 [24/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:54.987 [25/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:54.987 [26/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:54.987 [27/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:54.987 [28/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:54.987 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:54.987 [30/710] Linking target lib/librte_log.so.24.0 00:01:54.987 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:54.987 [32/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 
00:01:54.987 [33/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:54.987 [34/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:54.987 [35/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:54.987 [36/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:54.987 [37/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:54.987 [38/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:54.987 [39/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:55.249 [40/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:55.249 [41/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:55.249 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:55.249 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:55.249 [44/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:55.249 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:55.249 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:55.249 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:55.249 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:55.249 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:55.249 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:55.249 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:55.249 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:55.249 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:55.249 [54/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:55.249 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:55.249 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:55.249 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:55.249 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:55.249 [59/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:55.249 [60/710] Linking target lib/librte_kvargs.so.24.0 00:01:55.249 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:55.249 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:55.515 [63/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:55.515 [64/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:55.515 [65/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:55.774 [66/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:55.774 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:55.774 [68/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:55.774 [69/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:55.774 [70/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:55.774 [71/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:55.774 [72/710] Linking static target lib/librte_pci.a 
00:01:56.038 [73/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:56.038 [74/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:56.038 [75/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:56.038 [76/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:56.038 [77/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:56.038 [78/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:56.038 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:56.038 [80/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:56.313 [81/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:56.313 [82/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.313 [83/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:56.313 [84/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:56.313 [85/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:56.313 [86/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:56.313 [87/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:56.313 [88/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:56.313 [89/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:56.313 [90/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:56.313 [91/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:56.313 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:56.313 [93/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:56.313 [94/710] Linking static target lib/librte_ring.a 00:01:56.313 [95/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:56.313 [96/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:56.313 [97/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:56.313 [98/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:56.313 [99/710] Linking static target lib/librte_meter.a 00:01:56.313 [100/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:56.573 [101/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:56.573 [102/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:56.573 [103/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:56.573 [104/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:56.573 [105/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:56.573 [106/710] Linking static target lib/librte_telemetry.a 00:01:56.573 [107/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:56.573 [108/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:56.573 [109/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:56.573 [110/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:56.573 [111/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:56.835 [112/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:56.835 [113/710] Compiling C object 
lib/librte_net.a.p/net_rte_ether.c.o 00:01:56.835 [114/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:56.835 [115/710] Linking static target lib/librte_eal.a 00:01:56.835 [116/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.835 [117/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.835 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:56.835 [119/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:56.835 [120/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:56.835 [121/710] Linking static target lib/librte_net.a 00:01:57.095 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:57.095 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:57.095 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:57.095 [125/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:57.095 [126/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:57.095 [127/710] Linking static target lib/librte_cmdline.a 00:01:57.358 [128/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:57.358 [129/710] Linking static target lib/librte_mempool.a 00:01:57.358 [130/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.358 [131/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:57.358 [132/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.358 [133/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:57.358 [134/710] Linking static target lib/librte_cfgfile.a 00:01:57.358 [135/710] Linking target lib/librte_telemetry.so.24.0 00:01:57.358 [136/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:57.358 [137/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:57.620 [138/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:57.620 [139/710] Linking static target lib/librte_metrics.a 00:01:57.620 [140/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:57.620 [141/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:57.620 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:57.620 [143/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:57.620 [144/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:57.620 [145/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:57.879 [146/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:57.879 [147/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:57.879 [148/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:57.879 [149/710] Linking static target lib/librte_rcu.a 00:01:57.879 [150/710] Linking static target lib/librte_bitratestats.a 00:01:57.879 [151/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:57.879 [152/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:57.879 [153/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.879 [154/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:57.879 [155/710] 
Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:58.143 [156/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:58.143 [157/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.143 [158/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:58.143 [159/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:58.143 [160/710] Linking static target lib/librte_timer.a 00:01:58.143 [161/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:58.143 [162/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.143 [163/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.406 [164/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.406 [165/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:58.406 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:58.406 [167/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:58.406 [168/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:58.406 [169/710] Linking static target lib/librte_bbdev.a 00:01:58.406 [170/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:58.406 [171/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.668 [172/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:58.668 [173/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:58.668 [174/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:58.668 [175/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:58.668 [176/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.668 [177/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:58.668 [178/710] Linking static target lib/librte_compressdev.a 00:01:58.933 [179/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:58.933 [180/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:58.933 [181/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:58.933 [182/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:59.198 [183/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:59.198 [184/710] Linking static target lib/librte_distributor.a 00:01:59.198 [185/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:59.198 [186/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:59.198 [187/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.461 [188/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:59.461 [189/710] Linking static target lib/librte_bpf.a 00:01:59.461 [190/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:59.461 [191/710] Linking static target lib/librte_dmadev.a 00:01:59.461 [192/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:59.461 [193/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:59.461 
[194/710] Linking static target lib/librte_dispatcher.a 00:01:59.723 [195/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.723 [196/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:59.723 [197/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:59.723 [198/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.723 [199/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:59.723 [200/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:59.723 [201/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:59.723 [202/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:59.723 [203/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:59.723 [204/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:59.723 [205/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:59.723 [206/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:59.723 [207/710] Linking static target lib/librte_gpudev.a 00:01:59.723 [208/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:59.723 [209/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:59.723 [210/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.983 [211/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:59.983 [212/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:59.983 [213/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:59.983 [214/710] Linking static target lib/librte_gro.a 00:01:59.983 [215/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:59.983 [216/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.983 [217/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:59.983 [218/710] Linking static target lib/librte_jobstats.a 00:02:00.248 [219/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:00.249 [220/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:00.249 [221/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.249 [222/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:00.249 [223/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.514 [224/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:00.514 [225/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:00.514 [226/710] Linking static target lib/librte_latencystats.a 00:02:00.514 [227/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:00.514 [228/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.514 [229/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:00.514 [230/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:00.514 [231/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:00.514 [232/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:00.514 [233/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 
00:02:00.776 [234/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:00.776 [235/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:00.776 [236/710] Linking static target lib/librte_ip_frag.a 00:02:00.776 [237/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:01.041 [238/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.041 [239/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:01.041 [240/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:01.041 [241/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:01.041 [242/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:01.305 [243/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:01.305 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:01.305 [245/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.305 [246/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.305 [247/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:01.305 [248/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:01.305 [249/710] Linking static target lib/librte_gso.a 00:02:01.568 [250/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:01.568 [251/710] Linking static target lib/librte_regexdev.a 00:02:01.568 [252/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:01.568 [253/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:01.568 [254/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:01.568 [255/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:01.568 [256/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:01.568 [257/710] Linking static target lib/librte_rawdev.a 00:02:01.568 [258/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:01.832 [259/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:01.832 [260/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.832 [261/710] Linking static target lib/librte_mldev.a 00:02:01.832 [262/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:01.832 [263/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:01.832 [264/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:01.832 [265/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:01.832 [266/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:01.832 [267/710] Linking static target lib/librte_efd.a 00:02:01.832 [268/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:01.832 [269/710] Linking static target lib/librte_pcapng.a 00:02:02.096 [270/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:02.096 [271/710] Linking static target lib/librte_stack.a 00:02:02.096 [272/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:02.096 [273/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:02.096 [274/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:02.096 [275/710] Linking static target 
lib/librte_lpm.a 00:02:02.096 [276/710] Linking static target lib/acl/libavx2_tmp.a 00:02:02.096 [277/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:02.096 [278/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:02.096 [279/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:02.359 [280/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:02.359 [281/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.359 [282/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:02.359 [283/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.359 [284/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:02.359 [285/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.359 [286/710] Linking static target lib/librte_hash.a 00:02:02.359 [287/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.359 [288/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:02.359 [289/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:02.359 [290/710] Linking static target lib/librte_reorder.a 00:02:02.359 [291/710] Linking static target lib/acl/libavx512_tmp.a 00:02:02.618 [292/710] Linking static target lib/librte_acl.a 00:02:02.618 [293/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:02.618 [294/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:02.618 [295/710] Linking static target lib/librte_power.a 00:02:02.618 [296/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:02.618 [297/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.618 [298/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.618 [299/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:02.618 [300/710] Linking static target lib/librte_security.a 00:02:02.881 [301/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:02.881 [302/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:02.881 [303/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:02.881 [304/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:02.881 [305/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:02.881 [306/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.881 [307/710] Linking static target lib/librte_mbuf.a 00:02:02.881 [308/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.143 [309/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:03.143 [310/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:03.143 [311/710] Linking static target lib/librte_rib.a 00:02:03.143 [312/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:03.143 [313/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:03.143 [314/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.143 [315/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:03.143 [316/710] Compiling C object 
lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:03.406 [317/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:03.406 [318/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.406 [319/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:03.406 [320/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:03.406 [321/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:03.406 [322/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:03.406 [323/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:03.406 [324/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:03.406 [325/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:03.406 [326/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.666 [327/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:03.666 [328/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.666 [329/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.666 [330/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:03.666 [331/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.928 [332/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:04.189 [333/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:04.189 [334/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:04.189 [335/710] Linking static target lib/librte_member.a 00:02:04.189 [336/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:04.451 [337/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:04.451 [338/710] Linking static target lib/librte_eventdev.a 00:02:04.451 [339/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:04.451 [340/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:04.451 [341/710] Linking static target lib/librte_cryptodev.a 00:02:04.451 [342/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:04.451 [343/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:04.451 [344/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:04.714 [345/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:04.714 [346/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:04.714 [347/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:04.714 [348/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:04.714 [349/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:04.714 [350/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:04.714 [351/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:04.714 [352/710] Linking static target lib/librte_sched.a 00:02:04.714 [353/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:04.714 [354/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:04.714 [355/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:04.714 [356/710] Generating lib/member.sym_chk with a custom command (wrapped by 
meson to capture output) 00:02:04.714 [357/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:04.714 [358/710] Linking static target lib/librte_ethdev.a 00:02:04.714 [359/710] Linking static target lib/librte_fib.a 00:02:04.984 [360/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:04.984 [361/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:04.984 [362/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:04.984 [363/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:04.984 [364/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:04.984 [365/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:05.247 [366/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:05.247 [367/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:05.247 [368/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:05.247 [369/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:05.508 [370/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.508 [371/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.508 [372/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:05.508 [373/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:05.508 [374/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:05.770 [375/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:05.770 [376/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:05.770 [377/710] Linking static target lib/librte_pdump.a 00:02:05.770 [378/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:05.770 [379/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:05.770 [380/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:06.031 [381/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:06.031 [382/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:06.031 [383/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:06.031 [384/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:06.031 [385/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:06.031 [386/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:06.031 [387/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:06.031 [388/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:06.031 [389/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:06.031 [390/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.031 [391/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:06.293 [392/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:06.293 [393/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:06.293 [394/710] Linking static target lib/librte_ipsec.a 00:02:06.293 [395/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:06.293 [396/710] Linking static target lib/librte_table.a 00:02:06.556 [397/710] Generating lib/cryptodev.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:06.556 [398/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:06.556 [399/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:06.556 [400/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:06.822 [401/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.822 [402/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:07.081 [403/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:07.081 [404/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:07.081 [405/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:07.341 [406/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:07.341 [407/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:07.341 [408/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:07.341 [409/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:07.341 [410/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:07.341 [411/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:07.341 [412/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:07.624 [413/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.624 [414/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:07.624 [415/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:07.624 [416/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:07.624 [417/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:07.624 [418/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:07.624 [419/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.896 [420/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:07.896 [421/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:07.896 [422/710] Linking static target drivers/librte_bus_vdev.a 00:02:07.896 [423/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:07.896 [424/710] Linking static target lib/librte_port.a 00:02:07.896 [425/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:07.896 [426/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:07.896 [427/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.158 [428/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:08.158 [429/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:08.158 [430/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:08.158 [431/710] Linking static target drivers/librte_bus_pci.a 00:02:08.158 [432/710] Linking target lib/librte_eal.so.24.0 00:02:08.158 [433/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:08.158 [434/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.158 [435/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:08.158 [436/710] Compiling C 
object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:08.158 [437/710] Linking static target lib/librte_graph.a 00:02:08.419 [438/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:08.419 [439/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:08.419 [440/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:08.419 [441/710] Linking target lib/librte_ring.so.24.0 00:02:08.419 [442/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:08.685 [443/710] Linking target lib/librte_meter.so.24.0 00:02:08.685 [444/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:08.685 [445/710] Linking target lib/librte_pci.so.24.0 00:02:08.685 [446/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:08.685 [447/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.948 [448/710] Linking target lib/librte_rcu.so.24.0 00:02:08.948 [449/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:08.948 [450/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:08.948 [451/710] Linking target lib/librte_mempool.so.24.0 00:02:08.948 [452/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:08.948 [453/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:08.948 [454/710] Linking target lib/librte_timer.so.24.0 00:02:08.948 [455/710] Linking target lib/librte_acl.so.24.0 00:02:08.948 [456/710] Linking target lib/librte_dmadev.so.24.0 00:02:08.948 [457/710] Linking target lib/librte_cfgfile.so.24.0 00:02:08.948 [458/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:08.948 [459/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.948 [460/710] Linking target lib/librte_jobstats.so.24.0 00:02:08.948 [461/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:08.948 [462/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:08.948 [463/710] Linking target lib/librte_rawdev.so.24.0 00:02:08.948 [464/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:09.211 [465/710] Linking target lib/librte_stack.so.24.0 00:02:09.211 [466/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:09.211 [467/710] Linking target drivers/librte_bus_pci.so.24.0 00:02:09.211 [468/710] Linking target drivers/librte_bus_vdev.so.24.0 00:02:09.211 [469/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:09.211 [470/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:09.211 [471/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:09.211 [472/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:09.211 [473/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:09.211 [474/710] Linking target lib/librte_rib.so.24.0 00:02:09.211 [475/710] Linking target lib/librte_mbuf.so.24.0 00:02:09.211 [476/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:09.211 [477/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:09.211 [478/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:09.211 [479/710] Generating 
lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.211 [480/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:09.211 [481/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:09.479 [482/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:09.479 [483/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:09.479 [484/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:09.479 [485/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:09.479 [486/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:09.479 [487/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:09.479 [488/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:09.479 [489/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:09.479 [490/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:09.479 [491/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:09.479 [492/710] Linking static target drivers/librte_mempool_ring.a 00:02:09.479 [493/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:09.479 [494/710] Linking target lib/librte_bbdev.so.24.0 00:02:09.479 [495/710] Linking target lib/librte_net.so.24.0 00:02:09.479 [496/710] Linking target lib/librte_compressdev.so.24.0 00:02:09.479 [497/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:09.479 [498/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:09.479 [499/710] Linking target lib/librte_fib.so.24.0 00:02:09.479 [500/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:09.479 [501/710] Linking target lib/librte_cryptodev.so.24.0 00:02:09.479 [502/710] Linking target lib/librte_gpudev.so.24.0 00:02:09.479 [503/710] Linking target lib/librte_distributor.so.24.0 00:02:09.743 [504/710] Linking target lib/librte_regexdev.so.24.0 00:02:09.743 [505/710] Linking target lib/librte_mldev.so.24.0 00:02:09.743 [506/710] Linking target lib/librte_reorder.so.24.0 00:02:09.743 [507/710] Linking target drivers/librte_mempool_ring.so.24.0 00:02:09.743 [508/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:09.743 [509/710] Linking target lib/librte_sched.so.24.0 00:02:09.743 [510/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:09.743 [511/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:09.743 [512/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:09.743 [513/710] Linking target lib/librte_cmdline.so.24.0 00:02:10.008 [514/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:10.008 [515/710] Linking target lib/librte_hash.so.24.0 00:02:10.008 [516/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:10.008 [517/710] Linking target lib/librte_security.so.24.0 00:02:10.008 [518/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:10.271 [519/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:10.271 [520/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:10.271 [521/710] Linking 
target lib/librte_efd.so.24.0 00:02:10.271 [522/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:10.271 [523/710] Linking target lib/librte_lpm.so.24.0 00:02:10.271 [524/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:10.271 [525/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:10.271 [526/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:10.271 [527/710] Linking target lib/librte_member.so.24.0 00:02:10.271 [528/710] Linking target lib/librte_ipsec.so.24.0 00:02:10.536 [529/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:10.536 [530/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:10.536 [531/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:10.536 [532/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:10.796 [533/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:10.796 [534/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:10.796 [535/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:10.796 [536/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:10.796 [537/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:11.058 [538/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:11.058 [539/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:11.058 [540/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:11.058 [541/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:11.321 [542/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:11.321 [543/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:11.321 [544/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:11.321 [545/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:11.321 [546/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:11.321 [547/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:11.321 [548/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:11.592 [549/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:11.592 [550/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:11.592 [551/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:11.592 [552/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:11.592 [553/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:11.854 [554/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:11.854 [555/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:11.854 [556/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:11.854 [557/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:11.854 [558/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:11.854 [559/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:12.430 [560/710] Compiling 
C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:12.430 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:12.693 [562/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:12.693 [563/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:12.693 [564/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:12.693 [565/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:12.955 [566/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.955 [567/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:12.955 [568/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:12.955 [569/710] Linking target lib/librte_ethdev.so.24.0 00:02:12.955 [570/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:12.955 [571/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:12.955 [572/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:13.219 [573/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:13.219 [574/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:13.219 [575/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:13.219 [576/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:13.219 [577/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:13.219 [578/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:13.219 [579/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:13.219 [580/710] Linking target lib/librte_metrics.so.24.0 00:02:13.483 [581/710] Linking target lib/librte_bpf.so.24.0 00:02:13.483 [582/710] Linking target lib/librte_eventdev.so.24.0 00:02:13.483 [583/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:13.483 [584/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:13.483 [585/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:13.483 [586/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:13.745 [587/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:13.745 [588/710] Linking target lib/librte_gro.so.24.0 00:02:13.745 [589/710] Linking target lib/librte_gso.so.24.0 00:02:13.745 [590/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:13.745 [591/710] Linking target lib/librte_ip_frag.so.24.0 00:02:13.745 [592/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:13.745 [593/710] Linking target lib/librte_latencystats.so.24.0 00:02:13.745 [594/710] Linking static target lib/librte_pdcp.a 00:02:13.745 [595/710] Linking target lib/librte_bitratestats.so.24.0 00:02:13.745 [596/710] Linking target lib/librte_pcapng.so.24.0 00:02:13.745 [597/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:13.745 [598/710] Linking target lib/librte_power.so.24.0 00:02:13.745 [599/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:13.745 [600/710] Linking target lib/librte_dispatcher.so.24.0 00:02:13.745 
[601/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:14.008 [602/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:14.008 [603/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:14.008 [604/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:14.008 [605/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:14.008 [606/710] Linking target lib/librte_port.so.24.0 00:02:14.008 [607/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:14.008 [608/710] Linking target lib/librte_pdump.so.24.0 00:02:14.008 [609/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:14.272 [610/710] Linking target lib/librte_graph.so.24.0 00:02:14.272 [611/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:14.272 [612/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:14.272 [613/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:14.272 [614/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:14.272 [615/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.272 [616/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:14.272 [617/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:14.272 [618/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:14.272 [619/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:14.272 [620/710] Linking target lib/librte_pdcp.so.24.0 00:02:14.272 [621/710] Linking target lib/librte_table.so.24.0 00:02:14.531 [622/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:14.531 [623/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:14.531 [624/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:14.531 [625/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:14.531 [626/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:14.531 [627/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:14.531 [628/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:15.101 [629/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:15.101 [630/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:15.102 [631/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:15.102 [632/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:15.102 [633/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:15.360 [634/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:15.361 [635/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:15.361 [636/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:15.621 [637/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:15.621 [638/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:15.621 [639/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 
00:02:15.621 [640/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:15.621 [641/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:15.621 [642/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:15.880 [643/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:15.880 [644/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:15.880 [645/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:15.880 [646/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:16.139 [647/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:16.139 [648/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:16.139 [649/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:16.139 [650/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:16.139 [651/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:16.139 [652/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:16.139 [653/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:16.397 [654/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:16.397 [655/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:16.656 [656/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:16.656 [657/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:16.656 [658/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:16.656 [659/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:16.656 [660/710] Linking static target drivers/librte_net_i40e.a 00:02:16.656 [661/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:16.656 [662/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:16.914 [663/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:17.172 [664/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:17.172 [665/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.172 [666/710] Linking target drivers/librte_net_i40e.so.24.0 00:02:17.430 [667/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:17.430 [668/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:17.689 [669/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:17.947 [670/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:18.204 [671/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:18.204 [672/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:18.204 [673/710] Linking static target lib/librte_node.a 00:02:18.462 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:18.462 [675/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.720 [676/710] Linking target lib/librte_node.so.24.0 00:02:19.653 [677/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:19.653 [678/710] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:19.910 [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:21.807 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:22.372 [681/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:27.636 [682/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:59.700 [683/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:59.700 [684/710] Linking static target lib/librte_vhost.a 00:02:59.700 [685/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.700 [686/710] Linking target lib/librte_vhost.so.24.0 00:03:14.607 [687/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:14.607 [688/710] Linking static target lib/librte_pipeline.a 00:03:14.607 [689/710] Linking target app/dpdk-dumpcap 00:03:14.607 [690/710] Linking target app/dpdk-test-dma-perf 00:03:14.607 [691/710] Linking target app/dpdk-test-cmdline 00:03:14.607 [692/710] Linking target app/dpdk-test-acl 00:03:14.607 [693/710] Linking target app/dpdk-proc-info 00:03:14.607 [694/710] Linking target app/dpdk-pdump 00:03:14.607 [695/710] Linking target app/dpdk-test-gpudev 00:03:14.607 [696/710] Linking target app/dpdk-test-bbdev 00:03:14.607 [697/710] Linking target app/dpdk-test-flow-perf 00:03:14.607 [698/710] Linking target app/dpdk-test-regex 00:03:14.607 [699/710] Linking target app/dpdk-test-crypto-perf 00:03:14.607 [700/710] Linking target app/dpdk-test-pipeline 00:03:14.607 [701/710] Linking target app/dpdk-test-mldev 00:03:14.607 [702/710] Linking target app/dpdk-test-sad 00:03:14.607 [703/710] Linking target app/dpdk-test-fib 00:03:14.607 [704/710] Linking target app/dpdk-graph 00:03:14.607 [705/710] Linking target app/dpdk-test-security-perf 00:03:14.607 [706/710] Linking target app/dpdk-test-compress-perf 00:03:14.607 [707/710] Linking target app/dpdk-test-eventdev 00:03:14.607 [708/710] Linking target app/dpdk-testpmd 00:03:14.865 [709/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.123 [710/710] Linking target lib/librte_pipeline.so.24.0 00:03:15.123 06:29:19 -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:03:15.123 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:15.123 [0/1] Installing files. 
00:03:15.385 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.385 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:15.386 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:15.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 
00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:15.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:15.388 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.388 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.389 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:15.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.390 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:15.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:15.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:15.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:15.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:15.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:03:15.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:15.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:15.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:15.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:15.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:15.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:15.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:15.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:15.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:15.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:15.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:15.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:15.391 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:15.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:15.391 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_hash.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_gso.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:15.650 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:16.222 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:16.222 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:16.222 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:16.222 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:16.222 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:16.222 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:16.222 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:16.222 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:16.222 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:16.222 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:16.222 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:16.222 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:16.222 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:16.222 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:16.222 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:16.222 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:16.222 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:16.222 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:16.222 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:16.222 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:16.222 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:16.222 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:16.222 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:16.222 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:16.222 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:16.222 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:16.222 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:16.222 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:16.222 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:16.222 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:16.222 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:16.222 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:16.222 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:16.222 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:16.222 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:16.222 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:16.222 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:16.222 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:16.222 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:16.222 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:16.222 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:16.222 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:16.222 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:16.222 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:16.222 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:16.222 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:16.222 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:16.222 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:16.222 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:16.222 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:16.222 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:16.222 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:16.222 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:16.222 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:16.222 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:16.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:16.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:16.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:16.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:16.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:16.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:16.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:16.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:16.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:16.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:16.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:16.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:16.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.223 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.224 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.225 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:16.226 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:16.226 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:03:16.226 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:03:16.226 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:03:16.226 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:16.227 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:03:16.227 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:16.227 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:03:16.227 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:16.227 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:03:16.227 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:16.227 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:03:16.227 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:16.227 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:03:16.227 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:16.227 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:03:16.227 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:16.227 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:03:16.227 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:16.227 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:03:16.227 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:16.227 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:03:16.227 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:16.227 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:03:16.227 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:16.227 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:03:16.227 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:16.227 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:03:16.227 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:16.227 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:03:16.227 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:16.227 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:03:16.227 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:16.227 
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:03:16.227 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:16.227 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:03:16.227 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:16.227 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:03:16.227 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:16.227 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:03:16.227 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:03:16.227 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:03:16.227 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:16.227 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:03:16.227 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:16.227 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:03:16.227 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:16.228 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:03:16.228 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:16.228 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:03:16.228 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:16.228 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:03:16.228 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:16.228 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:03:16.228 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:16.228 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:03:16.228 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:03:16.228 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:03:16.228 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:16.228 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:03:16.228 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:16.228 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:03:16.228 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:16.228 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:03:16.228 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:16.228 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:03:16.228 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:16.228 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:03:16.228 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:16.228 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:03:16.228 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:16.228 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:03:16.228 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:16.228 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:03:16.228 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:16.228 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:03:16.228 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:16.228 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:16.228 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:16.228 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:16.228 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:16.228 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:16.228 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:16.228 './librte_mempool_ring.so' -> 
'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:16.228 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:16.228 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:16.228 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:16.228 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:16.228 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:16.228 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:03:16.228 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:16.228 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:03:16.228 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:16.228 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:03:16.228 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:03:16.228 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:03:16.228 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:16.228 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:03:16.228 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:16.228 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:03:16.228 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:16.228 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:03:16.228 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:16.228 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:03:16.228 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:16.228 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:03:16.228 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:16.228 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:03:16.228 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:16.228 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:03:16.228 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:03:16.228 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:03:16.228 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:16.228 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:03:16.228 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:16.228 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:03:16.228 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:16.228 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:03:16.228 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:16.228 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:03:16.228 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:16.228 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:03:16.228 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:16.228 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:03:16.228 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:03:16.228 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:16.228 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:16.228 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:16.228 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:16.228 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:16.228 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:16.228 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:03:16.228 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:16.228 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0'
00:03:16.487 06:29:20 -- common/autobuild_common.sh@189 -- $ uname -s
00:03:16.487 06:29:20 -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:03:16.487 06:29:20 -- common/autobuild_common.sh@200 -- $ cat
00:03:16.487 06:29:20 -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:16.487
00:03:16.487 real 1m27.999s
00:03:16.487 user 17m58.618s
00:03:16.487 sys 2m6.909s
00:03:16.487 06:29:20 -- common/autotest_common.sh@1112 -- $ xtrace_disable
00:03:16.487 06:29:20 -- common/autotest_common.sh@10 -- $ set +x
00:03:16.487 ************************************
00:03:16.487 END TEST build_native_dpdk
00:03:16.487 ************************************
00:03:16.487 06:29:20 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:16.487 06:29:20 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:16.487 06:29:20 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:16.487 06:29:20 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:16.487 06:29:20 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:16.487 06:29:20 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:16.487 06:29:20 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:16.487 06:29:20 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared
00:03:16.487 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs...
00:03:16.487 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:16.487 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:16.487 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:03:16.744 Using 'verbs' RDMA provider
00:03:27.304 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:35.433 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:35.692 Creating mk/config.mk...done.
00:03:35.692 Creating mk/cc.flags.mk...done.
00:03:35.692 Type 'make' to build.
00:03:35.692 06:29:40 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
00:03:35.692 06:29:40 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:03:35.692 06:29:40 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:03:35.692 06:29:40 -- common/autotest_common.sh@10 -- $ set +x
00:03:35.692 ************************************
00:03:35.692 START TEST make
00:03:35.692 ************************************
00:03:35.692 06:29:40 -- common/autotest_common.sh@1111 -- $ make -j48
00:03:35.951 make[1]: Nothing to be done for 'all'.
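The DPDK build installed above provides headers under dpdk/build/include, shared libraries and their versioned symlinks under dpdk/build/lib (with the PMDs redirected into lib/dpdk/pmds-24.0 by symlink-drivers-solibs.sh), and the libdpdk.pc / libdpdk-libs.pc pkg-config files. SPDK's configure then consumes that prefix through --with-dpdk, as recorded in the lines above. Below is a minimal sketch of exercising the same prefix by hand; DPDK_PREFIX is shorthand introduced here, and only the configure flags that appear in this log are taken from it:

    # DPDK_PREFIX is a local shorthand for the install prefix used in this job.
    DPDK_PREFIX=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build

    # Make the freshly installed libdpdk.pc visible to pkg-config and inspect it.
    export PKG_CONFIG_PATH="$DPDK_PREFIX/lib/pkgconfig:$PKG_CONFIG_PATH"
    pkg-config --modversion libdpdk        # reports the installed DPDK version
    pkg-config --cflags --libs libdpdk     # compile and link flags for the shared libraries

    # Configure SPDK against that prefix (a subset of the flags recorded in this log).
    ./configure --with-dpdk="$DPDK_PREFIX" --with-shared --enable-debug --enable-werror

    # The shared PMDs were placed under $DPDK_PREFIX/lib/dpdk/pmds-24.0; the top-level
    # lib directory must be resolvable by the dynamic linker for shared builds.
    export LD_LIBRARY_PATH="$DPDK_PREFIX/lib:$LD_LIBRARY_PATH"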
00:03:37.385 The Meson build system 00:03:37.385 Version: 1.3.1 00:03:37.385 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:37.385 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:37.385 Build type: native build 00:03:37.385 Project name: libvfio-user 00:03:37.385 Project version: 0.0.1 00:03:37.385 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:37.385 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:37.385 Host machine cpu family: x86_64 00:03:37.385 Host machine cpu: x86_64 00:03:37.385 Run-time dependency threads found: YES 00:03:37.385 Library dl found: YES 00:03:37.385 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:37.385 Run-time dependency json-c found: YES 0.17 00:03:37.385 Run-time dependency cmocka found: YES 1.1.7 00:03:37.385 Program pytest-3 found: NO 00:03:37.385 Program flake8 found: NO 00:03:37.385 Program misspell-fixer found: NO 00:03:37.385 Program restructuredtext-lint found: NO 00:03:37.385 Program valgrind found: YES (/usr/bin/valgrind) 00:03:37.385 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:37.385 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:37.385 Compiler for C supports arguments -Wwrite-strings: YES 00:03:37.385 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:37.385 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:37.385 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:37.385 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:37.385 Build targets in project: 8 00:03:37.385 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:37.385 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:37.385 00:03:37.385 libvfio-user 0.0.1 00:03:37.385 00:03:37.385 User defined options 00:03:37.385 buildtype : debug 00:03:37.385 default_library: shared 00:03:37.385 libdir : /usr/local/lib 00:03:37.385 00:03:37.385 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:38.332 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:38.597 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:38.597 [2/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:38.597 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:38.597 [4/37] Compiling C object samples/null.p/null.c.o 00:03:38.597 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:38.597 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:38.597 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:38.597 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:38.597 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:38.597 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:38.597 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:38.597 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:38.597 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:38.597 [14/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:38.597 [15/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:38.597 [16/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:38.597 [17/37] Compiling C object samples/server.p/server.c.o 00:03:38.597 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:38.597 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:38.597 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:38.597 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:38.597 [22/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:38.597 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:38.597 [24/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:38.597 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:38.858 [26/37] Compiling C object samples/client.p/client.c.o 00:03:38.859 [27/37] Linking target samples/client 00:03:38.859 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:38.859 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:03:38.859 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:38.859 [31/37] Linking target test/unit_tests 00:03:39.117 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:39.117 [33/37] Linking target samples/null 00:03:39.118 [34/37] Linking target samples/gpio-pci-idio-16 00:03:39.118 [35/37] Linking target samples/shadow_ioeventfd_server 00:03:39.118 [36/37] Linking target samples/lspci 00:03:39.118 [37/37] Linking target samples/server 00:03:39.118 INFO: autodetecting backend as ninja 00:03:39.118 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
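libvfio-user is configured here as an out-of-tree Meson project (debug build type, shared default_library, libdir /usr/local/lib) and compiled with Ninja; the install step that follows is staged through DESTDIR into the SPDK build tree rather than the system /usr/local. A rough sketch of the equivalent manual invocation, assuming the source and build directories shown above; the option spellings are standard meson setup options, not ones copied from the log:

    SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
    BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
    STAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user

    # Configure a debug, shared-library build matching the "User defined options" summary.
    meson setup "$BUILD" "$SRC" --buildtype=debug --default-library=shared --libdir=/usr/local/lib

    # Build the 37 targets listed above.
    ninja -C "$BUILD"

    # Stage the install under the SPDK build tree instead of the real /usr/local.
    DESTDIR="$STAGE" meson install --quiet -C "$BUILD"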
00:03:39.118 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:40.065 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:40.065 ninja: no work to do. 00:03:52.257 CC lib/log/log.o 00:03:52.257 CC lib/log/log_flags.o 00:03:52.257 CC lib/log/log_deprecated.o 00:03:52.257 CC lib/ut_mock/mock.o 00:03:52.257 CC lib/ut/ut.o 00:03:52.257 LIB libspdk_ut_mock.a 00:03:52.257 SO libspdk_ut_mock.so.6.0 00:03:52.257 LIB libspdk_log.a 00:03:52.257 LIB libspdk_ut.a 00:03:52.257 SO libspdk_ut.so.2.0 00:03:52.257 SO libspdk_log.so.7.0 00:03:52.257 SYMLINK libspdk_ut_mock.so 00:03:52.257 SYMLINK libspdk_ut.so 00:03:52.257 SYMLINK libspdk_log.so 00:03:52.257 CC lib/dma/dma.o 00:03:52.257 CXX lib/trace_parser/trace.o 00:03:52.257 CC lib/ioat/ioat.o 00:03:52.257 CC lib/util/base64.o 00:03:52.257 CC lib/util/bit_array.o 00:03:52.257 CC lib/util/cpuset.o 00:03:52.257 CC lib/util/crc16.o 00:03:52.257 CC lib/util/crc32.o 00:03:52.257 CC lib/util/crc32c.o 00:03:52.257 CC lib/util/crc32_ieee.o 00:03:52.257 CC lib/util/crc64.o 00:03:52.257 CC lib/util/dif.o 00:03:52.257 CC lib/util/fd.o 00:03:52.257 CC lib/util/file.o 00:03:52.257 CC lib/util/hexlify.o 00:03:52.257 CC lib/util/iov.o 00:03:52.257 CC lib/util/math.o 00:03:52.257 CC lib/util/pipe.o 00:03:52.257 CC lib/util/strerror_tls.o 00:03:52.257 CC lib/util/string.o 00:03:52.257 CC lib/util/uuid.o 00:03:52.257 CC lib/util/fd_group.o 00:03:52.257 CC lib/util/xor.o 00:03:52.257 CC lib/util/zipf.o 00:03:52.257 CC lib/vfio_user/host/vfio_user_pci.o 00:03:52.257 CC lib/vfio_user/host/vfio_user.o 00:03:52.257 LIB libspdk_dma.a 00:03:52.257 SO libspdk_dma.so.4.0 00:03:52.257 SYMLINK libspdk_dma.so 00:03:52.257 LIB libspdk_ioat.a 00:03:52.257 SO libspdk_ioat.so.7.0 00:03:52.257 SYMLINK libspdk_ioat.so 00:03:52.257 LIB libspdk_vfio_user.a 00:03:52.257 SO libspdk_vfio_user.so.5.0 00:03:52.257 SYMLINK libspdk_vfio_user.so 00:03:52.257 LIB libspdk_util.a 00:03:52.257 SO libspdk_util.so.9.0 00:03:52.257 SYMLINK libspdk_util.so 00:03:52.515 CC lib/json/json_parse.o 00:03:52.515 CC lib/vmd/vmd.o 00:03:52.515 CC lib/rdma/common.o 00:03:52.515 CC lib/conf/conf.o 00:03:52.515 CC lib/idxd/idxd.o 00:03:52.515 CC lib/json/json_util.o 00:03:52.515 CC lib/vmd/led.o 00:03:52.515 CC lib/rdma/rdma_verbs.o 00:03:52.515 CC lib/env_dpdk/env.o 00:03:52.515 CC lib/idxd/idxd_user.o 00:03:52.515 CC lib/json/json_write.o 00:03:52.515 CC lib/env_dpdk/memory.o 00:03:52.515 CC lib/env_dpdk/pci.o 00:03:52.515 CC lib/env_dpdk/init.o 00:03:52.515 CC lib/env_dpdk/threads.o 00:03:52.515 CC lib/env_dpdk/pci_ioat.o 00:03:52.515 CC lib/env_dpdk/pci_virtio.o 00:03:52.515 CC lib/env_dpdk/pci_vmd.o 00:03:52.515 CC lib/env_dpdk/pci_idxd.o 00:03:52.515 CC lib/env_dpdk/pci_event.o 00:03:52.515 CC lib/env_dpdk/sigbus_handler.o 00:03:52.515 CC lib/env_dpdk/pci_dpdk.o 00:03:52.515 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:52.515 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:52.515 LIB libspdk_trace_parser.a 00:03:52.515 SO libspdk_trace_parser.so.5.0 00:03:52.773 SYMLINK libspdk_trace_parser.so 00:03:52.773 LIB libspdk_conf.a 00:03:52.773 SO libspdk_conf.so.6.0 00:03:52.773 LIB libspdk_rdma.a 00:03:52.773 SYMLINK libspdk_conf.so 00:03:52.773 LIB libspdk_json.a 00:03:52.773 SO libspdk_rdma.so.6.0 00:03:52.773 SO libspdk_json.so.6.0 00:03:52.773 SYMLINK libspdk_rdma.so 00:03:52.773 SYMLINK libspdk_json.so 00:03:53.031 CC 
lib/jsonrpc/jsonrpc_server.o 00:03:53.031 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:53.031 CC lib/jsonrpc/jsonrpc_client.o 00:03:53.031 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:53.031 LIB libspdk_idxd.a 00:03:53.031 SO libspdk_idxd.so.12.0 00:03:53.031 SYMLINK libspdk_idxd.so 00:03:53.031 LIB libspdk_vmd.a 00:03:53.031 SO libspdk_vmd.so.6.0 00:03:53.289 SYMLINK libspdk_vmd.so 00:03:53.289 LIB libspdk_jsonrpc.a 00:03:53.290 SO libspdk_jsonrpc.so.6.0 00:03:53.290 SYMLINK libspdk_jsonrpc.so 00:03:53.548 CC lib/rpc/rpc.o 00:03:53.806 LIB libspdk_rpc.a 00:03:53.806 SO libspdk_rpc.so.6.0 00:03:53.806 SYMLINK libspdk_rpc.so 00:03:54.064 CC lib/trace/trace.o 00:03:54.064 CC lib/notify/notify.o 00:03:54.064 CC lib/trace/trace_flags.o 00:03:54.064 CC lib/notify/notify_rpc.o 00:03:54.064 CC lib/trace/trace_rpc.o 00:03:54.064 CC lib/keyring/keyring.o 00:03:54.064 CC lib/keyring/keyring_rpc.o 00:03:54.064 LIB libspdk_notify.a 00:03:54.322 SO libspdk_notify.so.6.0 00:03:54.322 LIB libspdk_trace.a 00:03:54.322 LIB libspdk_keyring.a 00:03:54.322 SYMLINK libspdk_notify.so 00:03:54.322 SO libspdk_keyring.so.1.0 00:03:54.322 SO libspdk_trace.so.10.0 00:03:54.322 SYMLINK libspdk_keyring.so 00:03:54.322 SYMLINK libspdk_trace.so 00:03:54.322 LIB libspdk_env_dpdk.a 00:03:54.580 CC lib/thread/thread.o 00:03:54.580 CC lib/thread/iobuf.o 00:03:54.580 CC lib/sock/sock.o 00:03:54.580 CC lib/sock/sock_rpc.o 00:03:54.580 SO libspdk_env_dpdk.so.14.0 00:03:54.580 SYMLINK libspdk_env_dpdk.so 00:03:54.838 LIB libspdk_sock.a 00:03:54.838 SO libspdk_sock.so.9.0 00:03:54.838 SYMLINK libspdk_sock.so 00:03:55.096 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:55.096 CC lib/nvme/nvme_ctrlr.o 00:03:55.096 CC lib/nvme/nvme_fabric.o 00:03:55.096 CC lib/nvme/nvme_ns_cmd.o 00:03:55.096 CC lib/nvme/nvme_ns.o 00:03:55.096 CC lib/nvme/nvme_pcie_common.o 00:03:55.096 CC lib/nvme/nvme_pcie.o 00:03:55.096 CC lib/nvme/nvme_qpair.o 00:03:55.096 CC lib/nvme/nvme.o 00:03:55.096 CC lib/nvme/nvme_quirks.o 00:03:55.096 CC lib/nvme/nvme_transport.o 00:03:55.096 CC lib/nvme/nvme_discovery.o 00:03:55.096 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:55.096 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:55.096 CC lib/nvme/nvme_tcp.o 00:03:55.096 CC lib/nvme/nvme_opal.o 00:03:55.096 CC lib/nvme/nvme_io_msg.o 00:03:55.096 CC lib/nvme/nvme_poll_group.o 00:03:55.096 CC lib/nvme/nvme_zns.o 00:03:55.096 CC lib/nvme/nvme_stubs.o 00:03:55.096 CC lib/nvme/nvme_auth.o 00:03:55.096 CC lib/nvme/nvme_cuse.o 00:03:55.096 CC lib/nvme/nvme_vfio_user.o 00:03:55.096 CC lib/nvme/nvme_rdma.o 00:03:56.030 LIB libspdk_thread.a 00:03:56.030 SO libspdk_thread.so.10.0 00:03:56.288 SYMLINK libspdk_thread.so 00:03:56.288 CC lib/accel/accel.o 00:03:56.288 CC lib/vfu_tgt/tgt_endpoint.o 00:03:56.288 CC lib/virtio/virtio.o 00:03:56.288 CC lib/init/json_config.o 00:03:56.288 CC lib/accel/accel_rpc.o 00:03:56.288 CC lib/virtio/virtio_vhost_user.o 00:03:56.288 CC lib/accel/accel_sw.o 00:03:56.288 CC lib/vfu_tgt/tgt_rpc.o 00:03:56.288 CC lib/blob/blobstore.o 00:03:56.288 CC lib/virtio/virtio_vfio_user.o 00:03:56.288 CC lib/init/subsystem.o 00:03:56.288 CC lib/virtio/virtio_pci.o 00:03:56.288 CC lib/blob/request.o 00:03:56.289 CC lib/init/subsystem_rpc.o 00:03:56.289 CC lib/blob/zeroes.o 00:03:56.289 CC lib/init/rpc.o 00:03:56.289 CC lib/blob/blob_bs_dev.o 00:03:56.546 LIB libspdk_init.a 00:03:56.546 SO libspdk_init.so.5.0 00:03:56.804 LIB libspdk_vfu_tgt.a 00:03:56.804 LIB libspdk_virtio.a 00:03:56.804 SYMLINK libspdk_init.so 00:03:56.804 SO libspdk_vfu_tgt.so.3.0 00:03:56.804 SO libspdk_virtio.so.7.0 
00:03:56.804 SYMLINK libspdk_vfu_tgt.so 00:03:56.804 SYMLINK libspdk_virtio.so 00:03:56.804 CC lib/event/app.o 00:03:56.804 CC lib/event/reactor.o 00:03:56.804 CC lib/event/log_rpc.o 00:03:56.804 CC lib/event/app_rpc.o 00:03:56.804 CC lib/event/scheduler_static.o 00:03:57.370 LIB libspdk_event.a 00:03:57.370 SO libspdk_event.so.13.0 00:03:57.370 SYMLINK libspdk_event.so 00:03:57.370 LIB libspdk_accel.a 00:03:57.370 SO libspdk_accel.so.15.0 00:03:57.629 SYMLINK libspdk_accel.so 00:03:57.629 LIB libspdk_nvme.a 00:03:57.629 CC lib/bdev/bdev.o 00:03:57.629 CC lib/bdev/bdev_rpc.o 00:03:57.629 CC lib/bdev/bdev_zone.o 00:03:57.629 CC lib/bdev/part.o 00:03:57.629 CC lib/bdev/scsi_nvme.o 00:03:57.629 SO libspdk_nvme.so.13.0 00:03:58.194 SYMLINK libspdk_nvme.so 00:03:59.568 LIB libspdk_blob.a 00:03:59.568 SO libspdk_blob.so.11.0 00:03:59.568 SYMLINK libspdk_blob.so 00:03:59.568 CC lib/blobfs/blobfs.o 00:03:59.568 CC lib/blobfs/tree.o 00:03:59.569 CC lib/lvol/lvol.o 00:04:00.511 LIB libspdk_bdev.a 00:04:00.511 SO libspdk_bdev.so.15.0 00:04:00.511 LIB libspdk_blobfs.a 00:04:00.511 SO libspdk_blobfs.so.10.0 00:04:00.511 LIB libspdk_lvol.a 00:04:00.511 SYMLINK libspdk_bdev.so 00:04:00.511 SYMLINK libspdk_blobfs.so 00:04:00.511 SO libspdk_lvol.so.10.0 00:04:00.511 SYMLINK libspdk_lvol.so 00:04:00.511 CC lib/nvmf/ctrlr.o 00:04:00.511 CC lib/nbd/nbd.o 00:04:00.511 CC lib/nvmf/ctrlr_discovery.o 00:04:00.511 CC lib/nbd/nbd_rpc.o 00:04:00.511 CC lib/ftl/ftl_core.o 00:04:00.511 CC lib/ftl/ftl_init.o 00:04:00.511 CC lib/nvmf/ctrlr_bdev.o 00:04:00.511 CC lib/ftl/ftl_layout.o 00:04:00.511 CC lib/ftl/ftl_debug.o 00:04:00.511 CC lib/nvmf/subsystem.o 00:04:00.512 CC lib/ftl/ftl_io.o 00:04:00.512 CC lib/scsi/dev.o 00:04:00.512 CC lib/nvmf/nvmf.o 00:04:00.512 CC lib/ublk/ublk.o 00:04:00.512 CC lib/ftl/ftl_sb.o 00:04:00.512 CC lib/scsi/lun.o 00:04:00.512 CC lib/ublk/ublk_rpc.o 00:04:00.512 CC lib/ftl/ftl_l2p.o 00:04:00.512 CC lib/nvmf/transport.o 00:04:00.512 CC lib/nvmf/nvmf_rpc.o 00:04:00.512 CC lib/ftl/ftl_l2p_flat.o 00:04:00.512 CC lib/scsi/port.o 00:04:00.512 CC lib/nvmf/tcp.o 00:04:00.512 CC lib/ftl/ftl_nv_cache.o 00:04:00.512 CC lib/scsi/scsi.o 00:04:00.512 CC lib/scsi/scsi_bdev.o 00:04:00.512 CC lib/nvmf/vfio_user.o 00:04:00.512 CC lib/ftl/ftl_band.o 00:04:00.512 CC lib/scsi/scsi_pr.o 00:04:00.512 CC lib/nvmf/rdma.o 00:04:00.512 CC lib/ftl/ftl_band_ops.o 00:04:00.512 CC lib/scsi/scsi_rpc.o 00:04:00.512 CC lib/ftl/ftl_writer.o 00:04:00.512 CC lib/ftl/ftl_rq.o 00:04:00.512 CC lib/scsi/task.o 00:04:00.512 CC lib/ftl/ftl_reloc.o 00:04:00.512 CC lib/ftl/ftl_l2p_cache.o 00:04:00.512 CC lib/ftl/ftl_p2l.o 00:04:00.512 CC lib/ftl/mngt/ftl_mngt.o 00:04:00.512 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:00.512 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:00.512 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:00.512 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:00.512 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:00.512 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:00.512 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:00.512 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:00.512 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:01.086 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:01.086 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:01.086 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:01.086 CC lib/ftl/utils/ftl_conf.o 00:04:01.086 CC lib/ftl/utils/ftl_md.o 00:04:01.086 CC lib/ftl/utils/ftl_mempool.o 00:04:01.086 CC lib/ftl/utils/ftl_bitmap.o 00:04:01.086 CC lib/ftl/utils/ftl_property.o 00:04:01.086 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:01.086 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:01.086 CC 
lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:01.086 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:01.086 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:01.086 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:01.086 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:01.086 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:01.086 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:01.086 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:01.086 CC lib/ftl/base/ftl_base_dev.o 00:04:01.086 CC lib/ftl/base/ftl_base_bdev.o 00:04:01.086 CC lib/ftl/ftl_trace.o 00:04:01.343 LIB libspdk_nbd.a 00:04:01.343 SO libspdk_nbd.so.7.0 00:04:01.601 LIB libspdk_scsi.a 00:04:01.601 SYMLINK libspdk_nbd.so 00:04:01.601 SO libspdk_scsi.so.9.0 00:04:01.601 LIB libspdk_ublk.a 00:04:01.601 SYMLINK libspdk_scsi.so 00:04:01.601 SO libspdk_ublk.so.3.0 00:04:01.601 SYMLINK libspdk_ublk.so 00:04:01.858 CC lib/iscsi/conn.o 00:04:01.858 CC lib/vhost/vhost.o 00:04:01.858 CC lib/iscsi/init_grp.o 00:04:01.858 CC lib/vhost/vhost_rpc.o 00:04:01.858 CC lib/iscsi/iscsi.o 00:04:01.858 CC lib/vhost/vhost_scsi.o 00:04:01.858 CC lib/iscsi/md5.o 00:04:01.858 CC lib/vhost/vhost_blk.o 00:04:01.858 CC lib/iscsi/param.o 00:04:01.858 CC lib/vhost/rte_vhost_user.o 00:04:01.858 CC lib/iscsi/portal_grp.o 00:04:01.858 CC lib/iscsi/tgt_node.o 00:04:01.858 CC lib/iscsi/iscsi_subsystem.o 00:04:01.858 CC lib/iscsi/iscsi_rpc.o 00:04:01.858 CC lib/iscsi/task.o 00:04:01.858 LIB libspdk_ftl.a 00:04:02.116 SO libspdk_ftl.so.9.0 00:04:02.373 SYMLINK libspdk_ftl.so 00:04:02.936 LIB libspdk_vhost.a 00:04:03.194 SO libspdk_vhost.so.8.0 00:04:03.194 LIB libspdk_nvmf.a 00:04:03.194 SYMLINK libspdk_vhost.so 00:04:03.194 SO libspdk_nvmf.so.18.0 00:04:03.194 LIB libspdk_iscsi.a 00:04:03.194 SO libspdk_iscsi.so.8.0 00:04:03.452 SYMLINK libspdk_nvmf.so 00:04:03.452 SYMLINK libspdk_iscsi.so 00:04:03.711 CC module/env_dpdk/env_dpdk_rpc.o 00:04:03.711 CC module/vfu_device/vfu_virtio.o 00:04:03.711 CC module/vfu_device/vfu_virtio_blk.o 00:04:03.711 CC module/vfu_device/vfu_virtio_scsi.o 00:04:03.711 CC module/vfu_device/vfu_virtio_rpc.o 00:04:03.711 CC module/accel/ioat/accel_ioat.o 00:04:03.711 CC module/sock/posix/posix.o 00:04:03.711 CC module/accel/error/accel_error.o 00:04:03.711 CC module/blob/bdev/blob_bdev.o 00:04:03.711 CC module/accel/ioat/accel_ioat_rpc.o 00:04:03.711 CC module/accel/iaa/accel_iaa.o 00:04:03.711 CC module/accel/error/accel_error_rpc.o 00:04:03.711 CC module/accel/iaa/accel_iaa_rpc.o 00:04:03.711 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:03.711 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:03.711 CC module/scheduler/gscheduler/gscheduler.o 00:04:03.711 CC module/keyring/file/keyring.o 00:04:03.711 CC module/accel/dsa/accel_dsa.o 00:04:03.711 CC module/accel/dsa/accel_dsa_rpc.o 00:04:03.711 CC module/keyring/file/keyring_rpc.o 00:04:03.969 LIB libspdk_env_dpdk_rpc.a 00:04:03.969 SO libspdk_env_dpdk_rpc.so.6.0 00:04:03.969 SYMLINK libspdk_env_dpdk_rpc.so 00:04:03.969 LIB libspdk_keyring_file.a 00:04:03.969 LIB libspdk_scheduler_dpdk_governor.a 00:04:03.969 LIB libspdk_scheduler_gscheduler.a 00:04:03.969 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:03.969 SO libspdk_scheduler_gscheduler.so.4.0 00:04:03.969 SO libspdk_keyring_file.so.1.0 00:04:03.969 LIB libspdk_accel_error.a 00:04:03.969 LIB libspdk_scheduler_dynamic.a 00:04:03.969 LIB libspdk_accel_ioat.a 00:04:03.969 LIB libspdk_accel_iaa.a 00:04:03.969 SO libspdk_accel_error.so.2.0 00:04:03.969 SO libspdk_scheduler_dynamic.so.4.0 00:04:03.969 SO libspdk_accel_ioat.so.6.0 00:04:03.969 SYMLINK libspdk_scheduler_gscheduler.so 00:04:03.969 
SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:03.969 SYMLINK libspdk_keyring_file.so 00:04:03.969 SO libspdk_accel_iaa.so.3.0 00:04:03.969 LIB libspdk_accel_dsa.a 00:04:03.969 LIB libspdk_blob_bdev.a 00:04:03.969 SYMLINK libspdk_accel_error.so 00:04:03.969 SYMLINK libspdk_scheduler_dynamic.so 00:04:04.227 SO libspdk_accel_dsa.so.5.0 00:04:04.227 SYMLINK libspdk_accel_ioat.so 00:04:04.227 SO libspdk_blob_bdev.so.11.0 00:04:04.227 SYMLINK libspdk_accel_iaa.so 00:04:04.227 SYMLINK libspdk_accel_dsa.so 00:04:04.227 SYMLINK libspdk_blob_bdev.so 00:04:04.486 LIB libspdk_vfu_device.a 00:04:04.486 CC module/blobfs/bdev/blobfs_bdev.o 00:04:04.486 CC module/bdev/raid/bdev_raid.o 00:04:04.486 CC module/bdev/nvme/bdev_nvme.o 00:04:04.486 CC module/bdev/null/bdev_null.o 00:04:04.486 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:04.486 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:04.486 CC module/bdev/error/vbdev_error.o 00:04:04.486 CC module/bdev/raid/bdev_raid_rpc.o 00:04:04.486 CC module/bdev/null/bdev_null_rpc.o 00:04:04.486 CC module/bdev/gpt/gpt.o 00:04:04.486 CC module/bdev/nvme/nvme_rpc.o 00:04:04.486 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:04.486 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:04.486 CC module/bdev/aio/bdev_aio.o 00:04:04.486 CC module/bdev/error/vbdev_error_rpc.o 00:04:04.486 CC module/bdev/aio/bdev_aio_rpc.o 00:04:04.486 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:04.486 CC module/bdev/nvme/bdev_mdns_client.o 00:04:04.486 CC module/bdev/malloc/bdev_malloc.o 00:04:04.486 CC module/bdev/ftl/bdev_ftl.o 00:04:04.486 CC module/bdev/delay/vbdev_delay.o 00:04:04.486 CC module/bdev/raid/bdev_raid_sb.o 00:04:04.486 CC module/bdev/nvme/vbdev_opal.o 00:04:04.486 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:04.486 CC module/bdev/gpt/vbdev_gpt.o 00:04:04.486 CC module/bdev/raid/raid0.o 00:04:04.487 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:04.487 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:04.487 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:04.487 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:04.487 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:04.487 CC module/bdev/raid/raid1.o 00:04:04.487 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:04.487 CC module/bdev/split/vbdev_split.o 00:04:04.487 CC module/bdev/raid/concat.o 00:04:04.487 CC module/bdev/lvol/vbdev_lvol.o 00:04:04.487 SO libspdk_vfu_device.so.3.0 00:04:04.487 CC module/bdev/split/vbdev_split_rpc.o 00:04:04.487 CC module/bdev/iscsi/bdev_iscsi.o 00:04:04.487 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:04.487 CC module/bdev/passthru/vbdev_passthru.o 00:04:04.487 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:04.487 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:04.487 SYMLINK libspdk_vfu_device.so 00:04:04.745 LIB libspdk_sock_posix.a 00:04:04.745 SO libspdk_sock_posix.so.6.0 00:04:04.745 LIB libspdk_blobfs_bdev.a 00:04:04.745 SO libspdk_blobfs_bdev.so.6.0 00:04:05.004 LIB libspdk_bdev_passthru.a 00:04:05.004 LIB libspdk_bdev_null.a 00:04:05.004 SYMLINK libspdk_sock_posix.so 00:04:05.004 LIB libspdk_bdev_split.a 00:04:05.004 LIB libspdk_bdev_ftl.a 00:04:05.004 LIB libspdk_bdev_error.a 00:04:05.004 SYMLINK libspdk_blobfs_bdev.so 00:04:05.004 SO libspdk_bdev_passthru.so.6.0 00:04:05.004 SO libspdk_bdev_null.so.6.0 00:04:05.004 SO libspdk_bdev_split.so.6.0 00:04:05.004 SO libspdk_bdev_ftl.so.6.0 00:04:05.004 SO libspdk_bdev_error.so.6.0 00:04:05.004 LIB libspdk_bdev_gpt.a 00:04:05.004 SO libspdk_bdev_gpt.so.6.0 00:04:05.004 SYMLINK libspdk_bdev_passthru.so 00:04:05.004 LIB libspdk_bdev_aio.a 00:04:05.004 SYMLINK 
libspdk_bdev_null.so 00:04:05.004 SYMLINK libspdk_bdev_split.so 00:04:05.004 SYMLINK libspdk_bdev_error.so 00:04:05.004 SYMLINK libspdk_bdev_ftl.so 00:04:05.004 SO libspdk_bdev_aio.so.6.0 00:04:05.004 LIB libspdk_bdev_zone_block.a 00:04:05.004 LIB libspdk_bdev_iscsi.a 00:04:05.004 SYMLINK libspdk_bdev_gpt.so 00:04:05.004 LIB libspdk_bdev_delay.a 00:04:05.004 LIB libspdk_bdev_malloc.a 00:04:05.004 SO libspdk_bdev_iscsi.so.6.0 00:04:05.004 SO libspdk_bdev_zone_block.so.6.0 00:04:05.004 SO libspdk_bdev_delay.so.6.0 00:04:05.004 SYMLINK libspdk_bdev_aio.so 00:04:05.004 SO libspdk_bdev_malloc.so.6.0 00:04:05.004 SYMLINK libspdk_bdev_zone_block.so 00:04:05.004 SYMLINK libspdk_bdev_iscsi.so 00:04:05.004 SYMLINK libspdk_bdev_delay.so 00:04:05.262 SYMLINK libspdk_bdev_malloc.so 00:04:05.262 LIB libspdk_bdev_lvol.a 00:04:05.262 LIB libspdk_bdev_virtio.a 00:04:05.262 SO libspdk_bdev_lvol.so.6.0 00:04:05.262 SO libspdk_bdev_virtio.so.6.0 00:04:05.262 SYMLINK libspdk_bdev_lvol.so 00:04:05.262 SYMLINK libspdk_bdev_virtio.so 00:04:05.519 LIB libspdk_bdev_raid.a 00:04:05.519 SO libspdk_bdev_raid.so.6.0 00:04:05.777 SYMLINK libspdk_bdev_raid.so 00:04:06.710 LIB libspdk_bdev_nvme.a 00:04:06.968 SO libspdk_bdev_nvme.so.7.0 00:04:06.968 SYMLINK libspdk_bdev_nvme.so 00:04:07.225 CC module/event/subsystems/keyring/keyring.o 00:04:07.225 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:07.225 CC module/event/subsystems/scheduler/scheduler.o 00:04:07.225 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:07.225 CC module/event/subsystems/iobuf/iobuf.o 00:04:07.225 CC module/event/subsystems/sock/sock.o 00:04:07.225 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:07.225 CC module/event/subsystems/vmd/vmd.o 00:04:07.225 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:07.483 LIB libspdk_event_keyring.a 00:04:07.483 LIB libspdk_event_sock.a 00:04:07.483 LIB libspdk_event_vhost_blk.a 00:04:07.483 LIB libspdk_event_scheduler.a 00:04:07.483 LIB libspdk_event_vfu_tgt.a 00:04:07.483 LIB libspdk_event_vmd.a 00:04:07.483 LIB libspdk_event_iobuf.a 00:04:07.483 SO libspdk_event_keyring.so.1.0 00:04:07.483 SO libspdk_event_sock.so.5.0 00:04:07.483 SO libspdk_event_vfu_tgt.so.3.0 00:04:07.483 SO libspdk_event_vhost_blk.so.3.0 00:04:07.483 SO libspdk_event_scheduler.so.4.0 00:04:07.483 SO libspdk_event_vmd.so.6.0 00:04:07.483 SO libspdk_event_iobuf.so.3.0 00:04:07.483 SYMLINK libspdk_event_sock.so 00:04:07.483 SYMLINK libspdk_event_keyring.so 00:04:07.483 SYMLINK libspdk_event_vhost_blk.so 00:04:07.483 SYMLINK libspdk_event_vfu_tgt.so 00:04:07.483 SYMLINK libspdk_event_scheduler.so 00:04:07.483 SYMLINK libspdk_event_vmd.so 00:04:07.483 SYMLINK libspdk_event_iobuf.so 00:04:07.740 CC module/event/subsystems/accel/accel.o 00:04:07.998 LIB libspdk_event_accel.a 00:04:07.998 SO libspdk_event_accel.so.6.0 00:04:07.998 SYMLINK libspdk_event_accel.so 00:04:08.255 CC module/event/subsystems/bdev/bdev.o 00:04:08.255 LIB libspdk_event_bdev.a 00:04:08.255 SO libspdk_event_bdev.so.6.0 00:04:08.513 SYMLINK libspdk_event_bdev.so 00:04:08.513 CC module/event/subsystems/ublk/ublk.o 00:04:08.513 CC module/event/subsystems/nbd/nbd.o 00:04:08.513 CC module/event/subsystems/scsi/scsi.o 00:04:08.513 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:08.513 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:08.771 LIB libspdk_event_nbd.a 00:04:08.771 LIB libspdk_event_ublk.a 00:04:08.771 LIB libspdk_event_scsi.a 00:04:08.771 SO libspdk_event_nbd.so.6.0 00:04:08.771 SO libspdk_event_ublk.so.3.0 00:04:08.771 SO libspdk_event_scsi.so.6.0 00:04:08.771 
SYMLINK libspdk_event_nbd.so 00:04:08.771 SYMLINK libspdk_event_ublk.so 00:04:08.771 SYMLINK libspdk_event_scsi.so 00:04:08.771 LIB libspdk_event_nvmf.a 00:04:08.771 SO libspdk_event_nvmf.so.6.0 00:04:08.771 SYMLINK libspdk_event_nvmf.so 00:04:09.029 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:09.029 CC module/event/subsystems/iscsi/iscsi.o 00:04:09.029 LIB libspdk_event_vhost_scsi.a 00:04:09.029 SO libspdk_event_vhost_scsi.so.3.0 00:04:09.029 LIB libspdk_event_iscsi.a 00:04:09.029 SO libspdk_event_iscsi.so.6.0 00:04:09.287 SYMLINK libspdk_event_vhost_scsi.so 00:04:09.287 SYMLINK libspdk_event_iscsi.so 00:04:09.287 SO libspdk.so.6.0 00:04:09.287 SYMLINK libspdk.so 00:04:09.550 CXX app/trace/trace.o 00:04:09.550 TEST_HEADER include/spdk/accel.h 00:04:09.550 CC app/trace_record/trace_record.o 00:04:09.550 TEST_HEADER include/spdk/accel_module.h 00:04:09.550 CC app/spdk_nvme_discover/discovery_aer.o 00:04:09.550 CC app/spdk_top/spdk_top.o 00:04:09.550 TEST_HEADER include/spdk/assert.h 00:04:09.550 CC test/rpc_client/rpc_client_test.o 00:04:09.550 TEST_HEADER include/spdk/barrier.h 00:04:09.550 CC app/spdk_lspci/spdk_lspci.o 00:04:09.550 TEST_HEADER include/spdk/base64.h 00:04:09.550 CC app/spdk_nvme_identify/identify.o 00:04:09.550 CC app/spdk_nvme_perf/perf.o 00:04:09.550 TEST_HEADER include/spdk/bdev.h 00:04:09.550 TEST_HEADER include/spdk/bdev_module.h 00:04:09.550 TEST_HEADER include/spdk/bdev_zone.h 00:04:09.550 TEST_HEADER include/spdk/bit_array.h 00:04:09.550 TEST_HEADER include/spdk/bit_pool.h 00:04:09.550 TEST_HEADER include/spdk/blob_bdev.h 00:04:09.550 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:09.550 TEST_HEADER include/spdk/blobfs.h 00:04:09.550 TEST_HEADER include/spdk/blob.h 00:04:09.550 TEST_HEADER include/spdk/conf.h 00:04:09.550 TEST_HEADER include/spdk/config.h 00:04:09.550 TEST_HEADER include/spdk/cpuset.h 00:04:09.550 TEST_HEADER include/spdk/crc16.h 00:04:09.550 TEST_HEADER include/spdk/crc32.h 00:04:09.550 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:09.550 TEST_HEADER include/spdk/crc64.h 00:04:09.550 TEST_HEADER include/spdk/dif.h 00:04:09.550 TEST_HEADER include/spdk/dma.h 00:04:09.550 CC app/spdk_dd/spdk_dd.o 00:04:09.550 TEST_HEADER include/spdk/endian.h 00:04:09.550 TEST_HEADER include/spdk/env_dpdk.h 00:04:09.550 TEST_HEADER include/spdk/env.h 00:04:09.550 TEST_HEADER include/spdk/event.h 00:04:09.550 CC app/iscsi_tgt/iscsi_tgt.o 00:04:09.550 CC app/vhost/vhost.o 00:04:09.550 TEST_HEADER include/spdk/fd_group.h 00:04:09.550 CC app/nvmf_tgt/nvmf_main.o 00:04:09.550 TEST_HEADER include/spdk/fd.h 00:04:09.550 TEST_HEADER include/spdk/file.h 00:04:09.550 TEST_HEADER include/spdk/ftl.h 00:04:09.550 TEST_HEADER include/spdk/gpt_spec.h 00:04:09.550 TEST_HEADER include/spdk/hexlify.h 00:04:09.550 TEST_HEADER include/spdk/histogram_data.h 00:04:09.550 TEST_HEADER include/spdk/idxd.h 00:04:09.550 TEST_HEADER include/spdk/idxd_spec.h 00:04:09.550 TEST_HEADER include/spdk/init.h 00:04:09.550 TEST_HEADER include/spdk/ioat.h 00:04:09.550 CC test/nvme/e2edp/nvme_dp.o 00:04:09.550 CC examples/nvme/reconnect/reconnect.o 00:04:09.550 CC app/spdk_tgt/spdk_tgt.o 00:04:09.550 CC examples/nvme/abort/abort.o 00:04:09.550 CC examples/nvme/hello_world/hello_world.o 00:04:09.550 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:09.550 CC examples/nvme/hotplug/hotplug.o 00:04:09.550 CC examples/nvme/arbitration/arbitration.o 00:04:09.550 CC examples/sock/hello_world/hello_sock.o 00:04:09.550 CC test/nvme/aer/aer.o 00:04:09.550 TEST_HEADER include/spdk/ioat_spec.h 
00:04:09.550 CC test/nvme/err_injection/err_injection.o 00:04:09.817 CC test/nvme/reset/reset.o 00:04:09.817 CC examples/accel/perf/accel_perf.o 00:04:09.817 TEST_HEADER include/spdk/iscsi_spec.h 00:04:09.817 CC examples/idxd/perf/perf.o 00:04:09.817 CC test/thread/poller_perf/poller_perf.o 00:04:09.817 TEST_HEADER include/spdk/json.h 00:04:09.817 TEST_HEADER include/spdk/jsonrpc.h 00:04:09.817 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:09.817 TEST_HEADER include/spdk/keyring.h 00:04:09.817 CC examples/ioat/perf/perf.o 00:04:09.817 CC app/fio/nvme/fio_plugin.o 00:04:09.817 CC test/nvme/overhead/overhead.o 00:04:09.817 TEST_HEADER include/spdk/keyring_module.h 00:04:09.817 CC examples/vmd/lsvmd/lsvmd.o 00:04:09.817 TEST_HEADER include/spdk/likely.h 00:04:09.817 CC test/event/event_perf/event_perf.o 00:04:09.817 CC examples/util/zipf/zipf.o 00:04:09.817 TEST_HEADER include/spdk/log.h 00:04:09.817 CC test/nvme/sgl/sgl.o 00:04:09.817 TEST_HEADER include/spdk/lvol.h 00:04:09.817 TEST_HEADER include/spdk/memory.h 00:04:09.817 TEST_HEADER include/spdk/mmio.h 00:04:09.817 TEST_HEADER include/spdk/nbd.h 00:04:09.817 TEST_HEADER include/spdk/notify.h 00:04:09.817 TEST_HEADER include/spdk/nvme.h 00:04:09.817 TEST_HEADER include/spdk/nvme_intel.h 00:04:09.817 CC test/blobfs/mkfs/mkfs.o 00:04:09.817 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:09.817 CC test/accel/dif/dif.o 00:04:09.817 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:09.817 TEST_HEADER include/spdk/nvme_spec.h 00:04:09.817 CC test/dma/test_dma/test_dma.o 00:04:09.817 TEST_HEADER include/spdk/nvme_zns.h 00:04:09.817 CC test/bdev/bdevio/bdevio.o 00:04:09.817 CC examples/nvmf/nvmf/nvmf.o 00:04:09.817 CC examples/blob/hello_world/hello_blob.o 00:04:09.817 CC examples/bdev/hello_world/hello_bdev.o 00:04:09.817 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:09.817 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:09.817 TEST_HEADER include/spdk/nvmf.h 00:04:09.817 TEST_HEADER include/spdk/nvmf_spec.h 00:04:09.817 TEST_HEADER include/spdk/nvmf_transport.h 00:04:09.817 CC examples/thread/thread/thread_ex.o 00:04:09.817 TEST_HEADER include/spdk/opal.h 00:04:09.817 TEST_HEADER include/spdk/opal_spec.h 00:04:09.817 CC test/app/bdev_svc/bdev_svc.o 00:04:09.817 TEST_HEADER include/spdk/pci_ids.h 00:04:09.817 TEST_HEADER include/spdk/pipe.h 00:04:09.817 TEST_HEADER include/spdk/queue.h 00:04:09.817 TEST_HEADER include/spdk/reduce.h 00:04:09.817 TEST_HEADER include/spdk/rpc.h 00:04:09.817 TEST_HEADER include/spdk/scheduler.h 00:04:09.817 TEST_HEADER include/spdk/scsi.h 00:04:09.817 TEST_HEADER include/spdk/scsi_spec.h 00:04:09.817 TEST_HEADER include/spdk/sock.h 00:04:09.817 TEST_HEADER include/spdk/stdinc.h 00:04:09.817 TEST_HEADER include/spdk/string.h 00:04:09.817 TEST_HEADER include/spdk/thread.h 00:04:09.817 LINK spdk_lspci 00:04:09.817 TEST_HEADER include/spdk/trace.h 00:04:09.817 TEST_HEADER include/spdk/trace_parser.h 00:04:09.817 TEST_HEADER include/spdk/tree.h 00:04:09.817 TEST_HEADER include/spdk/ublk.h 00:04:09.817 CC test/env/mem_callbacks/mem_callbacks.o 00:04:09.817 TEST_HEADER include/spdk/util.h 00:04:09.817 TEST_HEADER include/spdk/uuid.h 00:04:09.817 TEST_HEADER include/spdk/version.h 00:04:09.817 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:09.817 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:09.817 TEST_HEADER include/spdk/vhost.h 00:04:09.817 CC test/lvol/esnap/esnap.o 00:04:09.817 TEST_HEADER include/spdk/vmd.h 00:04:09.817 TEST_HEADER include/spdk/xor.h 00:04:09.817 TEST_HEADER include/spdk/zipf.h 00:04:09.817 CXX 
test/cpp_headers/accel.o 00:04:09.817 LINK rpc_client_test 00:04:10.082 LINK spdk_nvme_discover 00:04:10.082 LINK interrupt_tgt 00:04:10.082 LINK lsvmd 00:04:10.082 LINK poller_perf 00:04:10.082 LINK nvmf_tgt 00:04:10.082 LINK event_perf 00:04:10.082 LINK vhost 00:04:10.082 LINK zipf 00:04:10.082 LINK spdk_trace_record 00:04:10.082 LINK iscsi_tgt 00:04:10.082 LINK cmb_copy 00:04:10.082 LINK err_injection 00:04:10.082 LINK spdk_tgt 00:04:10.082 LINK ioat_perf 00:04:10.082 LINK hello_world 00:04:10.082 LINK mkfs 00:04:10.082 LINK hello_sock 00:04:10.082 LINK hotplug 00:04:10.082 LINK bdev_svc 00:04:10.341 LINK nvme_dp 00:04:10.341 LINK hello_blob 00:04:10.341 LINK reset 00:04:10.341 LINK sgl 00:04:10.341 LINK aer 00:04:10.341 LINK hello_bdev 00:04:10.341 LINK thread 00:04:10.341 CXX test/cpp_headers/accel_module.o 00:04:10.341 LINK overhead 00:04:10.341 LINK spdk_dd 00:04:10.341 LINK arbitration 00:04:10.341 LINK idxd_perf 00:04:10.341 LINK nvmf 00:04:10.341 LINK reconnect 00:04:10.341 CC examples/vmd/led/led.o 00:04:10.341 CC examples/bdev/bdevperf/bdevperf.o 00:04:10.342 LINK spdk_trace 00:04:10.342 CXX test/cpp_headers/assert.o 00:04:10.342 LINK abort 00:04:10.342 CC examples/ioat/verify/verify.o 00:04:10.602 CC test/event/reactor/reactor.o 00:04:10.602 CC test/event/reactor_perf/reactor_perf.o 00:04:10.602 LINK bdevio 00:04:10.602 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:10.602 LINK dif 00:04:10.602 LINK test_dma 00:04:10.602 CC test/nvme/startup/startup.o 00:04:10.602 LINK accel_perf 00:04:10.602 CC test/event/app_repeat/app_repeat.o 00:04:10.602 CC examples/blob/cli/blobcli.o 00:04:10.602 CXX test/cpp_headers/barrier.o 00:04:10.602 CC test/nvme/reserve/reserve.o 00:04:10.602 CC test/event/scheduler/scheduler.o 00:04:10.602 CC test/nvme/simple_copy/simple_copy.o 00:04:10.602 LINK nvme_manage 00:04:10.602 CC test/app/histogram_perf/histogram_perf.o 00:04:10.602 CC test/app/jsoncat/jsoncat.o 00:04:10.602 CC test/env/vtophys/vtophys.o 00:04:10.602 CC test/nvme/connect_stress/connect_stress.o 00:04:10.602 CXX test/cpp_headers/base64.o 00:04:10.602 CXX test/cpp_headers/bdev.o 00:04:10.864 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:10.864 LINK spdk_nvme 00:04:10.864 LINK led 00:04:10.864 CC test/app/stub/stub.o 00:04:10.864 CC test/env/memory/memory_ut.o 00:04:10.864 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:10.864 CXX test/cpp_headers/bdev_module.o 00:04:10.864 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:10.864 LINK reactor 00:04:10.864 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:10.864 CC test/env/pci/pci_ut.o 00:04:10.864 LINK reactor_perf 00:04:10.864 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:10.864 CC app/fio/bdev/fio_plugin.o 00:04:10.864 CXX test/cpp_headers/bdev_zone.o 00:04:10.864 CC test/nvme/boot_partition/boot_partition.o 00:04:10.864 CC test/nvme/compliance/nvme_compliance.o 00:04:10.864 LINK app_repeat 00:04:10.864 LINK pmr_persistence 00:04:10.864 CXX test/cpp_headers/bit_array.o 00:04:10.864 CC test/nvme/fused_ordering/fused_ordering.o 00:04:10.864 LINK startup 00:04:10.864 CXX test/cpp_headers/bit_pool.o 00:04:10.864 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:10.864 CC test/nvme/fdp/fdp.o 00:04:10.864 CXX test/cpp_headers/blob_bdev.o 00:04:10.864 CC test/nvme/cuse/cuse.o 00:04:10.864 LINK verify 00:04:10.864 CXX test/cpp_headers/blobfs_bdev.o 00:04:10.864 LINK jsoncat 00:04:10.864 CXX test/cpp_headers/blobfs.o 00:04:10.864 CXX test/cpp_headers/blob.o 00:04:10.864 CXX test/cpp_headers/conf.o 00:04:10.864 LINK vtophys 
00:04:10.864 LINK histogram_perf 00:04:10.864 CXX test/cpp_headers/config.o 00:04:11.133 CXX test/cpp_headers/cpuset.o 00:04:11.133 LINK connect_stress 00:04:11.133 LINK reserve 00:04:11.133 CXX test/cpp_headers/crc16.o 00:04:11.133 CXX test/cpp_headers/crc64.o 00:04:11.133 CXX test/cpp_headers/crc32.o 00:04:11.133 CXX test/cpp_headers/dif.o 00:04:11.133 LINK env_dpdk_post_init 00:04:11.133 CXX test/cpp_headers/dma.o 00:04:11.133 LINK mem_callbacks 00:04:11.133 LINK spdk_nvme_perf 00:04:11.133 LINK simple_copy 00:04:11.133 LINK scheduler 00:04:11.133 LINK stub 00:04:11.133 CXX test/cpp_headers/endian.o 00:04:11.133 CXX test/cpp_headers/env_dpdk.o 00:04:11.133 LINK boot_partition 00:04:11.133 CXX test/cpp_headers/env.o 00:04:11.133 CXX test/cpp_headers/event.o 00:04:11.133 CXX test/cpp_headers/fd_group.o 00:04:11.133 CXX test/cpp_headers/fd.o 00:04:11.133 LINK spdk_nvme_identify 00:04:11.133 LINK spdk_top 00:04:11.133 CXX test/cpp_headers/file.o 00:04:11.133 CXX test/cpp_headers/ftl.o 00:04:11.133 CXX test/cpp_headers/gpt_spec.o 00:04:11.133 CXX test/cpp_headers/hexlify.o 00:04:11.397 CXX test/cpp_headers/histogram_data.o 00:04:11.397 CXX test/cpp_headers/idxd.o 00:04:11.397 LINK fused_ordering 00:04:11.397 CXX test/cpp_headers/idxd_spec.o 00:04:11.397 CXX test/cpp_headers/init.o 00:04:11.397 LINK doorbell_aers 00:04:11.397 CXX test/cpp_headers/ioat.o 00:04:11.397 CXX test/cpp_headers/ioat_spec.o 00:04:11.397 CXX test/cpp_headers/iscsi_spec.o 00:04:11.397 CXX test/cpp_headers/json.o 00:04:11.397 CXX test/cpp_headers/jsonrpc.o 00:04:11.397 CXX test/cpp_headers/keyring.o 00:04:11.397 CXX test/cpp_headers/keyring_module.o 00:04:11.397 CXX test/cpp_headers/likely.o 00:04:11.397 CXX test/cpp_headers/log.o 00:04:11.397 CXX test/cpp_headers/lvol.o 00:04:11.397 CXX test/cpp_headers/memory.o 00:04:11.397 CXX test/cpp_headers/mmio.o 00:04:11.397 CXX test/cpp_headers/nbd.o 00:04:11.397 CXX test/cpp_headers/notify.o 00:04:11.397 CXX test/cpp_headers/nvme.o 00:04:11.397 CXX test/cpp_headers/nvme_intel.o 00:04:11.397 LINK nvme_compliance 00:04:11.397 CXX test/cpp_headers/nvme_ocssd.o 00:04:11.397 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:11.397 LINK blobcli 00:04:11.397 CXX test/cpp_headers/nvme_spec.o 00:04:11.397 CXX test/cpp_headers/nvme_zns.o 00:04:11.656 CXX test/cpp_headers/nvmf_cmd.o 00:04:11.656 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:11.656 CXX test/cpp_headers/nvmf.o 00:04:11.656 LINK pci_ut 00:04:11.656 LINK fdp 00:04:11.656 LINK nvme_fuzz 00:04:11.656 CXX test/cpp_headers/nvmf_spec.o 00:04:11.656 CXX test/cpp_headers/nvmf_transport.o 00:04:11.656 CXX test/cpp_headers/opal.o 00:04:11.656 CXX test/cpp_headers/opal_spec.o 00:04:11.656 CXX test/cpp_headers/pci_ids.o 00:04:11.656 LINK vhost_fuzz 00:04:11.656 CXX test/cpp_headers/pipe.o 00:04:11.656 CXX test/cpp_headers/queue.o 00:04:11.656 CXX test/cpp_headers/reduce.o 00:04:11.656 CXX test/cpp_headers/rpc.o 00:04:11.656 CXX test/cpp_headers/scheduler.o 00:04:11.656 CXX test/cpp_headers/scsi.o 00:04:11.656 CXX test/cpp_headers/scsi_spec.o 00:04:11.656 CXX test/cpp_headers/sock.o 00:04:11.656 CXX test/cpp_headers/stdinc.o 00:04:11.656 CXX test/cpp_headers/string.o 00:04:11.656 CXX test/cpp_headers/thread.o 00:04:11.656 CXX test/cpp_headers/trace.o 00:04:11.656 CXX test/cpp_headers/trace_parser.o 00:04:11.656 CXX test/cpp_headers/tree.o 00:04:11.916 CXX test/cpp_headers/ublk.o 00:04:11.916 CXX test/cpp_headers/util.o 00:04:11.916 LINK spdk_bdev 00:04:11.916 CXX test/cpp_headers/uuid.o 00:04:11.916 CXX test/cpp_headers/version.o 00:04:11.916 CXX 
test/cpp_headers/vfio_user_pci.o 00:04:11.916 CXX test/cpp_headers/vfio_user_spec.o 00:04:11.916 CXX test/cpp_headers/vhost.o 00:04:11.916 CXX test/cpp_headers/vmd.o 00:04:11.916 CXX test/cpp_headers/xor.o 00:04:11.916 CXX test/cpp_headers/zipf.o 00:04:11.916 LINK bdevperf 00:04:12.174 LINK memory_ut 00:04:12.432 LINK cuse 00:04:12.998 LINK iscsi_fuzz 00:04:15.527 LINK esnap 00:04:15.786 00:04:15.786 real 0m40.037s 00:04:15.786 user 7m27.354s 00:04:15.786 sys 1m49.648s 00:04:15.786 06:30:20 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:04:15.786 06:30:20 -- common/autotest_common.sh@10 -- $ set +x 00:04:15.786 ************************************ 00:04:15.786 END TEST make 00:04:15.786 ************************************ 00:04:15.786 06:30:20 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:15.786 06:30:20 -- pm/common@30 -- $ signal_monitor_resources TERM 00:04:15.786 06:30:20 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:04:15.786 06:30:20 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:15.786 06:30:20 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:15.786 06:30:20 -- pm/common@45 -- $ pid=3953096 00:04:15.786 06:30:20 -- pm/common@52 -- $ sudo kill -TERM 3953096 00:04:15.786 06:30:20 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:15.786 06:30:20 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:15.786 06:30:20 -- pm/common@45 -- $ pid=3953093 00:04:15.786 06:30:20 -- pm/common@52 -- $ sudo kill -TERM 3953093 00:04:15.786 06:30:20 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:15.786 06:30:20 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:15.786 06:30:20 -- pm/common@45 -- $ pid=3953095 00:04:15.786 06:30:20 -- pm/common@52 -- $ sudo kill -TERM 3953095 00:04:15.786 06:30:20 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:15.786 06:30:20 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:15.786 06:30:20 -- pm/common@45 -- $ pid=3953094 00:04:15.786 06:30:20 -- pm/common@52 -- $ sudo kill -TERM 3953094 00:04:15.786 06:30:20 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:15.786 06:30:20 -- nvmf/common.sh@7 -- # uname -s 00:04:15.786 06:30:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:15.786 06:30:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:15.786 06:30:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:15.786 06:30:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:15.786 06:30:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:15.786 06:30:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:15.786 06:30:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:15.786 06:30:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:15.786 06:30:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:15.786 06:30:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:15.786 06:30:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:15.786 06:30:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:15.786 06:30:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:04:15.786 06:30:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:15.786 06:30:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:15.786 06:30:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:15.786 06:30:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:15.786 06:30:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:15.786 06:30:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:15.786 06:30:20 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:15.786 06:30:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:15.786 06:30:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:15.786 06:30:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:15.786 06:30:20 -- paths/export.sh@5 -- # export PATH 00:04:15.786 06:30:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:15.786 06:30:20 -- nvmf/common.sh@47 -- # : 0 00:04:15.786 06:30:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:16.045 06:30:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:16.045 06:30:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:16.045 06:30:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:16.045 06:30:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:16.045 06:30:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:16.045 06:30:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:16.045 06:30:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:16.045 06:30:20 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:16.045 06:30:20 -- spdk/autotest.sh@32 -- # uname -s 00:04:16.045 06:30:20 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:16.045 06:30:20 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:16.045 06:30:20 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:16.045 06:30:20 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:16.045 06:30:20 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:16.045 06:30:20 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:16.045 06:30:20 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:16.045 06:30:20 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:16.045 06:30:20 -- spdk/autotest.sh@48 -- # 
udevadm_pid=4029382 00:04:16.045 06:30:20 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:16.045 06:30:20 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:16.045 06:30:20 -- pm/common@17 -- # local monitor 00:04:16.045 06:30:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:16.045 06:30:20 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=4029384 00:04:16.045 06:30:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:16.045 06:30:20 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=4029387 00:04:16.045 06:30:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:16.045 06:30:20 -- pm/common@21 -- # date +%s 00:04:16.045 06:30:20 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=4029389 00:04:16.045 06:30:20 -- pm/common@21 -- # date +%s 00:04:16.045 06:30:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:16.045 06:30:20 -- pm/common@21 -- # date +%s 00:04:16.045 06:30:20 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=4029393 00:04:16.045 06:30:20 -- pm/common@26 -- # sleep 1 00:04:16.045 06:30:20 -- pm/common@21 -- # date +%s 00:04:16.045 06:30:20 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713328220 00:04:16.045 06:30:20 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713328220 00:04:16.045 06:30:20 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713328220 00:04:16.045 06:30:20 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713328220 00:04:16.045 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713328220_collect-vmstat.pm.log 00:04:16.045 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713328220_collect-bmc-pm.bmc.pm.log 00:04:16.045 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713328220_collect-cpu-load.pm.log 00:04:16.045 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713328220_collect-cpu-temp.pm.log 00:04:16.980 06:30:21 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:16.980 06:30:21 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:16.980 06:30:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:16.980 06:30:21 -- common/autotest_common.sh@10 -- # set +x 00:04:16.980 06:30:21 -- spdk/autotest.sh@59 -- # create_test_list 00:04:16.980 06:30:21 -- common/autotest_common.sh@734 -- # xtrace_disable 00:04:16.980 06:30:21 -- common/autotest_common.sh@10 -- # set +x 00:04:16.980 06:30:21 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:16.980 06:30:21 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:16.980 06:30:21 -- spdk/autotest.sh@61 -- # 
src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:16.980 06:30:21 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:16.980 06:30:21 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:16.980 06:30:21 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:16.980 06:30:21 -- common/autotest_common.sh@1441 -- # uname 00:04:16.980 06:30:21 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:04:16.980 06:30:21 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:16.980 06:30:21 -- common/autotest_common.sh@1461 -- # uname 00:04:16.980 06:30:21 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:04:16.980 06:30:21 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:16.980 06:30:21 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:16.980 06:30:21 -- spdk/autotest.sh@72 -- # hash lcov 00:04:16.980 06:30:21 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:16.980 06:30:21 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:16.980 --rc lcov_branch_coverage=1 00:04:16.980 --rc lcov_function_coverage=1 00:04:16.980 --rc genhtml_branch_coverage=1 00:04:16.980 --rc genhtml_function_coverage=1 00:04:16.980 --rc genhtml_legend=1 00:04:16.980 --rc geninfo_all_blocks=1 00:04:16.980 ' 00:04:16.980 06:30:21 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:16.980 --rc lcov_branch_coverage=1 00:04:16.980 --rc lcov_function_coverage=1 00:04:16.980 --rc genhtml_branch_coverage=1 00:04:16.980 --rc genhtml_function_coverage=1 00:04:16.980 --rc genhtml_legend=1 00:04:16.980 --rc geninfo_all_blocks=1 00:04:16.980 ' 00:04:16.980 06:30:21 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:16.980 --rc lcov_branch_coverage=1 00:04:16.980 --rc lcov_function_coverage=1 00:04:16.980 --rc genhtml_branch_coverage=1 00:04:16.980 --rc genhtml_function_coverage=1 00:04:16.980 --rc genhtml_legend=1 00:04:16.980 --rc geninfo_all_blocks=1 00:04:16.980 --no-external' 00:04:16.980 06:30:21 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:16.980 --rc lcov_branch_coverage=1 00:04:16.980 --rc lcov_function_coverage=1 00:04:16.980 --rc genhtml_branch_coverage=1 00:04:16.980 --rc genhtml_function_coverage=1 00:04:16.980 --rc genhtml_legend=1 00:04:16.980 --rc geninfo_all_blocks=1 00:04:16.980 --no-external' 00:04:16.980 06:30:21 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:16.980 lcov: LCOV version 1.14 00:04:16.980 06:30:21 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:29.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:29.175 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:30.546 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:30.546 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:30.546 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:30.546 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:30.546 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:30.546 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:48.670 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:48.670 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:48.670 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any 
data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data 
for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did 
not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:48.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 
00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:48.672 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 
00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:49.239 06:30:53 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:49.239 06:30:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:49.239 06:30:53 -- common/autotest_common.sh@10 -- # set +x 00:04:49.239 06:30:53 -- spdk/autotest.sh@91 -- # rm -f 00:04:49.239 06:30:53 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:50.171 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:50.171 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:50.171 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:50.171 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:50.171 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:50.171 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:50.171 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:50.428 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:50.428 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:50.428 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:50.428 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:50.428 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:50.428 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:50.428 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:50.428 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:50.428 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:50.428 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:50.428 06:30:54 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:50.428 06:30:54 -- common/autotest_common.sh@1655 -- # zoned_devs=() 
00:04:50.428 06:30:54 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:50.428 06:30:54 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:50.428 06:30:54 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:50.428 06:30:54 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:50.428 06:30:54 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:50.428 06:30:54 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:50.428 06:30:54 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:50.428 06:30:54 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:50.428 06:30:54 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:50.428 06:30:54 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:50.428 06:30:54 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:50.428 06:30:54 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:50.428 06:30:54 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:50.428 No valid GPT data, bailing 00:04:50.428 06:30:55 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:50.685 06:30:55 -- scripts/common.sh@391 -- # pt= 00:04:50.685 06:30:55 -- scripts/common.sh@392 -- # return 1 00:04:50.685 06:30:55 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:50.685 1+0 records in 00:04:50.685 1+0 records out 00:04:50.685 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00177473 s, 591 MB/s 00:04:50.685 06:30:55 -- spdk/autotest.sh@118 -- # sync 00:04:50.685 06:30:55 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:50.685 06:30:55 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:50.685 06:30:55 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:52.584 06:30:56 -- spdk/autotest.sh@124 -- # uname -s 00:04:52.584 06:30:56 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:52.584 06:30:56 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:52.584 06:30:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.584 06:30:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.584 06:30:56 -- common/autotest_common.sh@10 -- # set +x 00:04:52.584 ************************************ 00:04:52.584 START TEST setup.sh 00:04:52.584 ************************************ 00:04:52.584 06:30:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:52.584 * Looking for test storage... 
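[Editor's note - illustrative sketch, not part of the captured console output.] The trace above walks the autotest pre-cleanup: get_zoned_devs probes /sys/block/*/queue/zoned, block_in_use probes the namespace for a partition table (spdk-gpt.py and blkid report "No valid GPT data, bailing"), and the first MiB of the idle namespace is then zeroed and synced. A minimal standalone equivalent, with the device glob and root privileges assumed, might look like this:

  # Sketch of the pre-cleanup traced above; adjust the device glob as needed.
  for dev in /dev/nvme*n1; do
    name=$(basename "$dev")
    # Skip zoned namespaces - only conventional block devices are wiped.
    if [[ -e /sys/block/$name/queue/zoned ]] && \
       [[ $(cat /sys/block/$name/queue/zoned) != none ]]; then
      continue
    fi
    # Probe for an existing partition table before touching the device.
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    if [[ -z $pt ]]; then
      # No partition table found: zero the first MiB so stale metadata
      # cannot confuse later tests, then flush to disk.
      dd if=/dev/zero of="$dev" bs=1M count=1
      sync
    fi
  done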
00:04:52.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:52.584 06:30:56 -- setup/test-setup.sh@10 -- # uname -s 00:04:52.584 06:30:56 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:52.584 06:30:56 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:52.584 06:30:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.584 06:30:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.584 06:30:56 -- common/autotest_common.sh@10 -- # set +x 00:04:52.584 ************************************ 00:04:52.584 START TEST acl 00:04:52.584 ************************************ 00:04:52.584 06:30:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:52.584 * Looking for test storage... 00:04:52.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:52.584 06:30:57 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:52.584 06:30:57 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:52.584 06:30:57 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:52.584 06:30:57 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:52.584 06:30:57 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:52.584 06:30:57 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:52.584 06:30:57 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:52.584 06:30:57 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:52.584 06:30:57 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:52.584 06:30:57 -- setup/acl.sh@12 -- # devs=() 00:04:52.584 06:30:57 -- setup/acl.sh@12 -- # declare -a devs 00:04:52.584 06:30:57 -- setup/acl.sh@13 -- # drivers=() 00:04:52.584 06:30:57 -- setup/acl.sh@13 -- # declare -A drivers 00:04:52.584 06:30:57 -- setup/acl.sh@51 -- # setup reset 00:04:52.584 06:30:57 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:52.584 06:30:57 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:53.960 06:30:58 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:53.960 06:30:58 -- setup/acl.sh@16 -- # local dev driver 00:04:53.960 06:30:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:53.960 06:30:58 -- setup/acl.sh@15 -- # setup output status 00:04:53.960 06:30:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.960 06:30:58 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:54.892 Hugepages 00:04:54.892 node hugesize free / total 00:04:54.892 06:30:59 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:54.892 06:30:59 -- setup/acl.sh@19 -- # continue 00:04:54.892 06:30:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:54.892 06:30:59 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:54.892 06:30:59 -- setup/acl.sh@19 -- # continue 00:04:54.892 06:30:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:54.892 06:30:59 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:54.892 06:30:59 -- setup/acl.sh@19 -- # continue 00:04:54.892 06:30:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:54.892 00:04:54.892 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:54.892 06:30:59 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:54.892 06:30:59 -- setup/acl.sh@19 -- # continue 
00:04:54.892 06:30:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:54.892 06:30:59 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:54.892 06:30:59 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:54.892 06:30:59 -- setup/acl.sh@20 -- # continue 00:04:54.892 06:30:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:54.892 06:30:59 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:54.892 06:30:59 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:54.892 06:30:59 -- setup/acl.sh@20 -- # continue 00:04:54.892 06:30:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:54.892 06:30:59 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:54.892 06:30:59 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:54.892 06:30:59 -- setup/acl.sh@20 -- # continue 00:04:54.892 06:30:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:54.892 06:30:59 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:54.892 06:30:59 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:54.892 06:30:59 -- setup/acl.sh@20 -- # continue 00:04:54.892 06:30:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:54.892 06:30:59 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:54.892 06:30:59 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:54.892 06:30:59 -- setup/acl.sh@20 -- # continue 00:04:54.892 06:30:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:54.892 06:30:59 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:54.892 06:30:59 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:54.892 06:30:59 -- setup/acl.sh@20 -- # continue 00:04:54.892 06:30:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:54.892 06:30:59 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:54.892 06:30:59 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:54.892 06:30:59 -- setup/acl.sh@20 -- # continue 00:04:54.892 06:30:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:54.892 06:30:59 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:55.151 06:30:59 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:55.151 06:30:59 -- setup/acl.sh@20 -- # continue 00:04:55.151 06:30:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.151 06:30:59 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:55.151 06:30:59 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:55.151 06:30:59 -- setup/acl.sh@20 -- # continue 00:04:55.151 06:30:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.151 06:30:59 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:55.151 06:30:59 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:55.151 06:30:59 -- setup/acl.sh@20 -- # continue 00:04:55.151 06:30:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.151 06:30:59 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:55.151 06:30:59 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:55.151 06:30:59 -- setup/acl.sh@20 -- # continue 00:04:55.151 06:30:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.151 06:30:59 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:55.151 06:30:59 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:55.151 06:30:59 -- setup/acl.sh@20 -- # continue 00:04:55.151 06:30:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.151 06:30:59 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:55.151 06:30:59 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:55.151 06:30:59 -- setup/acl.sh@20 -- # 
continue 00:04:55.151 06:30:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.151 06:30:59 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:55.151 06:30:59 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:55.151 06:30:59 -- setup/acl.sh@20 -- # continue 00:04:55.151 06:30:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.151 06:30:59 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:55.151 06:30:59 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:55.151 06:30:59 -- setup/acl.sh@20 -- # continue 00:04:55.151 06:30:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.151 06:30:59 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:55.151 06:30:59 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:55.151 06:30:59 -- setup/acl.sh@20 -- # continue 00:04:55.151 06:30:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.151 06:30:59 -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:04:55.151 06:30:59 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:55.151 06:30:59 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:55.151 06:30:59 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:55.151 06:30:59 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:55.151 06:30:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.151 06:30:59 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:55.151 06:30:59 -- setup/acl.sh@54 -- # run_test denied denied 00:04:55.151 06:30:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:55.151 06:30:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:55.151 06:30:59 -- common/autotest_common.sh@10 -- # set +x 00:04:55.151 ************************************ 00:04:55.151 START TEST denied 00:04:55.151 ************************************ 00:04:55.151 06:30:59 -- common/autotest_common.sh@1111 -- # denied 00:04:55.151 06:30:59 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:04:55.151 06:30:59 -- setup/acl.sh@38 -- # setup output config 00:04:55.151 06:30:59 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:04:55.151 06:30:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.151 06:30:59 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:56.525 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:04:56.525 06:31:00 -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:04:56.525 06:31:00 -- setup/acl.sh@28 -- # local dev driver 00:04:56.525 06:31:00 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:56.525 06:31:00 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:04:56.525 06:31:00 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:04:56.525 06:31:00 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:56.525 06:31:00 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:56.525 06:31:00 -- setup/acl.sh@41 -- # setup reset 00:04:56.525 06:31:00 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:56.525 06:31:00 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:59.053 00:04:59.053 real 0m3.582s 00:04:59.053 user 0m1.020s 00:04:59.053 sys 0m1.709s 00:04:59.053 06:31:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:59.053 06:31:03 -- common/autotest_common.sh@10 -- # set +x 00:04:59.053 ************************************ 00:04:59.053 END TEST denied 00:04:59.053 ************************************ 00:04:59.053 06:31:03 
-- setup/acl.sh@55 -- # run_test allowed allowed 00:04:59.053 06:31:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.053 06:31:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.053 06:31:03 -- common/autotest_common.sh@10 -- # set +x 00:04:59.053 ************************************ 00:04:59.053 START TEST allowed 00:04:59.053 ************************************ 00:04:59.053 06:31:03 -- common/autotest_common.sh@1111 -- # allowed 00:04:59.053 06:31:03 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:04:59.053 06:31:03 -- setup/acl.sh@45 -- # setup output config 00:04:59.053 06:31:03 -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:04:59.053 06:31:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.053 06:31:03 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:01.590 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:01.590 06:31:05 -- setup/acl.sh@47 -- # verify 00:05:01.590 06:31:05 -- setup/acl.sh@28 -- # local dev driver 00:05:01.590 06:31:05 -- setup/acl.sh@48 -- # setup reset 00:05:01.590 06:31:05 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:01.590 06:31:05 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:02.524 00:05:02.524 real 0m3.657s 00:05:02.524 user 0m0.984s 00:05:02.524 sys 0m1.558s 00:05:02.524 06:31:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:02.524 06:31:07 -- common/autotest_common.sh@10 -- # set +x 00:05:02.524 ************************************ 00:05:02.524 END TEST allowed 00:05:02.524 ************************************ 00:05:02.524 00:05:02.524 real 0m10.064s 00:05:02.524 user 0m3.094s 00:05:02.524 sys 0m5.057s 00:05:02.524 06:31:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:02.524 06:31:07 -- common/autotest_common.sh@10 -- # set +x 00:05:02.524 ************************************ 00:05:02.524 END TEST acl 00:05:02.524 ************************************ 00:05:02.524 06:31:07 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:05:02.524 06:31:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:02.524 06:31:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.524 06:31:07 -- common/autotest_common.sh@10 -- # set +x 00:05:02.784 ************************************ 00:05:02.784 START TEST hugepages 00:05:02.784 ************************************ 00:05:02.784 06:31:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:05:02.784 * Looking for test storage... 
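[Editor's note - illustrative sketch, not part of the captured console output.] The "denied" and "allowed" acl tests above exercise scripts/setup.sh through its PCI_BLOCKED and PCI_ALLOWED environment variables against the single NVMe controller at 0000:88:00.0 found earlier in this run. A rough, hand-run equivalent of those two checks:

  # Sketch of the acl checks traced above; SETUP path taken from this workspace.
  SETUP=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh

  # denied: a blocked controller stays on its kernel driver and the config
  # step reports it as skipped.
  PCI_BLOCKED=' 0000:88:00.0' "$SETUP" config | grep 'Skipping denied controller at 0000:88:00.0'
  "$SETUP" reset

  # allowed: only the allowed controller is rebound (nvme -> vfio-pci here).
  PCI_ALLOWED=0000:88:00.0 "$SETUP" config | grep -E '0000:88:00.0 .*: nvme -> .*'
  "$SETUP" reset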
00:05:02.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:02.785 06:31:07 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:02.785 06:31:07 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:02.785 06:31:07 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:02.785 06:31:07 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:02.785 06:31:07 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:02.785 06:31:07 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:02.785 06:31:07 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:02.785 06:31:07 -- setup/common.sh@18 -- # local node= 00:05:02.785 06:31:07 -- setup/common.sh@19 -- # local var val 00:05:02.785 06:31:07 -- setup/common.sh@20 -- # local mem_f mem 00:05:02.785 06:31:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.785 06:31:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.785 06:31:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.785 06:31:07 -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.785 06:31:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.785 06:31:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 37720532 kB' 'MemAvailable: 41746648 kB' 'Buffers: 2696 kB' 'Cached: 15990224 kB' 'SwapCached: 0 kB' 'Active: 12809164 kB' 'Inactive: 3662132 kB' 'Active(anon): 12242432 kB' 'Inactive(anon): 0 kB' 'Active(file): 566732 kB' 'Inactive(file): 3662132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482184 kB' 'Mapped: 216596 kB' 'Shmem: 11764056 kB' 'KReclaimable: 441688 kB' 'Slab: 844428 kB' 'SReclaimable: 441688 kB' 'SUnreclaim: 402740 kB' 'KernelStack: 13024 kB' 'PageTables: 9440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562320 kB' 'Committed_AS: 13381664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197052 kB' 'VmallocChunk: 0 kB' 'Percpu: 40320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2563676 kB' 'DirectMap2M: 24619008 kB' 'DirectMap1G: 41943040 kB' 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.785 06:31:07 
-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.785 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.785 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 
00:05:02.786 06:31:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 
00:05:02.786 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # continue 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.786 06:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.786 06:31:07 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:02.786 06:31:07 -- setup/common.sh@33 -- # echo 2048 00:05:02.786 06:31:07 -- setup/common.sh@33 -- # return 0 00:05:02.786 06:31:07 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:02.786 06:31:07 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:02.786 06:31:07 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:02.786 06:31:07 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:02.786 06:31:07 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:02.786 06:31:07 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:02.786 06:31:07 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:02.786 06:31:07 -- setup/hugepages.sh@207 -- # get_nodes 00:05:02.786 06:31:07 -- setup/hugepages.sh@27 -- # local node 00:05:02.786 06:31:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:02.786 06:31:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:02.786 06:31:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:02.786 06:31:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:02.786 06:31:07 -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:02.786 06:31:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:02.786 06:31:07 -- setup/hugepages.sh@208 -- # clear_hp 00:05:02.786 06:31:07 -- setup/hugepages.sh@37 -- # local node hp 00:05:02.786 06:31:07 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:02.786 06:31:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:02.786 06:31:07 -- setup/hugepages.sh@41 -- # echo 0 00:05:02.786 06:31:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:02.786 06:31:07 -- setup/hugepages.sh@41 -- # echo 0 00:05:02.786 06:31:07 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:02.786 06:31:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:02.786 06:31:07 -- setup/hugepages.sh@41 -- # echo 0 00:05:02.786 06:31:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:02.786 06:31:07 -- setup/hugepages.sh@41 -- # echo 0 00:05:02.786 06:31:07 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:02.786 06:31:07 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:02.786 06:31:07 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:02.786 06:31:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:02.786 06:31:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.786 06:31:07 -- common/autotest_common.sh@10 -- # set +x 00:05:03.046 ************************************ 00:05:03.046 START TEST default_setup 00:05:03.046 ************************************ 00:05:03.046 06:31:07 -- common/autotest_common.sh@1111 -- # default_setup 00:05:03.046 06:31:07 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:03.046 06:31:07 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:03.046 06:31:07 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:03.046 06:31:07 -- setup/hugepages.sh@51 -- # shift 00:05:03.046 06:31:07 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:03.046 06:31:07 -- setup/hugepages.sh@52 -- # local node_ids 00:05:03.046 06:31:07 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:03.046 06:31:07 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:03.046 06:31:07 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:03.046 06:31:07 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:03.046 06:31:07 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:03.046 06:31:07 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:03.046 06:31:07 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:03.046 06:31:07 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:03.046 06:31:07 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:03.046 06:31:07 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
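[Editor's note - illustrative sketch, not part of the captured console output.] The hugepages helpers traced above read Hugepagesize out of /proc/meminfo field by field, clear_hp zeroes every per-node reservation, and default_setup then asks for 2097152 kB / 2048 kB = 1024 pages on node 0. A compact shell equivalent of that sequence (root assumed):

  # Sketch of the hugepage setup traced above.
  hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this host

  # clear_hp: drop existing reservations on every NUMA node and page size.
  for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
    echo 0 > "$hp"
  done

  # default_setup: reserve 1024 pages of the default size on node 0 only.
  echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-${hugepagesize_kb}kB/nr_hugepages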
00:05:03.046 06:31:07 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:03.046 06:31:07 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:03.046 06:31:07 -- setup/hugepages.sh@73 -- # return 0 00:05:03.046 06:31:07 -- setup/hugepages.sh@137 -- # setup output 00:05:03.046 06:31:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.046 06:31:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:04.421 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:04.421 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:04.421 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:04.421 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:04.421 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:04.421 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:04.421 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:04.421 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:04.421 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:04.421 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:04.421 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:04.421 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:04.421 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:04.421 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:04.421 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:04.421 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:05.360 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:05.360 06:31:09 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:05.360 06:31:09 -- setup/hugepages.sh@89 -- # local node 00:05:05.360 06:31:09 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:05.360 06:31:09 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:05.360 06:31:09 -- setup/hugepages.sh@92 -- # local surp 00:05:05.360 06:31:09 -- setup/hugepages.sh@93 -- # local resv 00:05:05.360 06:31:09 -- setup/hugepages.sh@94 -- # local anon 00:05:05.360 06:31:09 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:05.360 06:31:09 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:05.360 06:31:09 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:05.360 06:31:09 -- setup/common.sh@18 -- # local node= 00:05:05.360 06:31:09 -- setup/common.sh@19 -- # local var val 00:05:05.360 06:31:09 -- setup/common.sh@20 -- # local mem_f mem 00:05:05.360 06:31:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.360 06:31:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.360 06:31:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.360 06:31:09 -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.360 06:31:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 06:31:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39811856 kB' 'MemAvailable: 43837916 kB' 'Buffers: 2696 kB' 'Cached: 15990316 kB' 'SwapCached: 0 kB' 'Active: 12833448 kB' 'Inactive: 3662132 kB' 'Active(anon): 12266716 kB' 'Inactive(anon): 0 kB' 'Active(file): 566732 kB' 'Inactive(file): 3662132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505928 kB' 'Mapped: 217564 kB' 'Shmem: 11764148 kB' 'KReclaimable: 441632 kB' 'Slab: 843752 kB' 'SReclaimable: 441632 kB' 'SUnreclaim: 402120 kB' 'KernelStack: 13136 
kB' 'PageTables: 9992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13410348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197136 kB' 'VmallocChunk: 0 kB' 'Percpu: 40320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2563676 kB' 'DirectMap2M: 24619008 kB' 'DirectMap1G: 41943040 kB' 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.360 
06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.360 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.360 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # [[ 
KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 06:31:09 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.361 06:31:09 -- setup/common.sh@33 -- # echo 0 00:05:05.361 06:31:09 -- setup/common.sh@33 -- # return 0 00:05:05.361 06:31:09 -- setup/hugepages.sh@97 -- # anon=0 00:05:05.361 06:31:09 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:05.361 06:31:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:05.361 06:31:09 -- setup/common.sh@18 -- # local node= 00:05:05.361 06:31:09 -- setup/common.sh@19 -- # local var val 00:05:05.361 06:31:09 -- setup/common.sh@20 -- # local mem_f mem 00:05:05.361 06:31:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.361 06:31:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.361 06:31:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.361 06:31:09 -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.361 06:31:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 06:31:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39815008 kB' 'MemAvailable: 43841068 kB' 'Buffers: 2696 kB' 'Cached: 15990316 kB' 'SwapCached: 0 kB' 'Active: 12832396 kB' 'Inactive: 3662132 kB' 'Active(anon): 12265664 kB' 'Inactive(anon): 0 kB' 'Active(file): 566732 kB' 'Inactive(file): 3662132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505332 kB' 'Mapped: 217688 kB' 'Shmem: 11764148 kB' 'KReclaimable: 441632 kB' 'Slab: 843824 kB' 'SReclaimable: 441632 kB' 'SUnreclaim: 402192 kB' 'KernelStack: 12784 kB' 'PageTables: 8856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13410360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197008 kB' 'VmallocChunk: 0 kB' 'Percpu: 40320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2563676 
kB' 'DirectMap2M: 24619008 kB' 'DirectMap1G: 41943040 kB' 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 06:31:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- 
setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 
00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.363 06:31:09 -- setup/common.sh@33 -- # echo 0 00:05:05.363 06:31:09 -- setup/common.sh@33 -- # return 0 00:05:05.363 06:31:09 -- setup/hugepages.sh@99 -- # surp=0 00:05:05.363 06:31:09 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:05.363 06:31:09 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:05.363 06:31:09 -- setup/common.sh@18 -- # local node= 00:05:05.363 06:31:09 -- setup/common.sh@19 -- # local var val 00:05:05.363 06:31:09 -- setup/common.sh@20 -- # local mem_f mem 00:05:05.363 06:31:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.363 06:31:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.363 06:31:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.363 06:31:09 -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.363 06:31:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 06:31:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39819916 kB' 'MemAvailable: 43845968 kB' 'Buffers: 2696 kB' 'Cached: 15990328 kB' 'SwapCached: 0 kB' 'Active: 12827964 kB' 'Inactive: 3662132 kB' 'Active(anon): 12261232 kB' 'Inactive(anon): 0 kB' 'Active(file): 566732 kB' 'Inactive(file): 3662132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500408 kB' 'Mapped: 217316 kB' 'Shmem: 11764160 kB' 'KReclaimable: 441624 kB' 'Slab: 843816 kB' 'SReclaimable: 441624 kB' 'SUnreclaim: 402192 kB' 'KernelStack: 12832 kB' 'PageTables: 8956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13406268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197004 kB' 'VmallocChunk: 0 kB' 'Percpu: 40320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2563676 kB' 'DirectMap2M: 24619008 kB' 'DirectMap1G: 41943040 kB' 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # 
[[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 06:31:09 
-- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # continue 
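[editor note] The long runs of "[[ <key> == \H\u\g\e... ]] / continue" above are the xtrace of setup/common.sh's get_meminfo helper scanning every meminfo field until it reaches the one requested (AnonHugePages, HugePages_Surp, HugePages_Rsvd, ...). A minimal sketch of that scan, under the assumption of a simplified helper name (get_meminfo_sketch is not the SPDK function, just an illustration of the loop the trace shows):

#!/usr/bin/env bash
# Sketch only: read /proc/meminfo (or a per-node meminfo), strip any
# "Node N " prefix, then walk key/value pairs until the requested key matches.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    while IFS=': ' read -r var val _; do
        # every non-matching key produces one "continue" line in the xtrace above
        [[ $var == "$get" ]] || continue
        # print just the numeric value; the trailing unit, if any, lands in _
        echo "$val"
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
}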
00:05:05.363 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.363 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.364 06:31:09 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.364 06:31:09 -- setup/common.sh@33 -- # echo 0 00:05:05.364 06:31:09 -- setup/common.sh@33 -- # return 0 00:05:05.364 06:31:09 -- setup/hugepages.sh@100 -- # resv=0 00:05:05.364 06:31:09 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:05.364 nr_hugepages=1024 00:05:05.364 06:31:09 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:05.364 resv_hugepages=0 00:05:05.364 06:31:09 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:05.364 surplus_hugepages=0 00:05:05.364 06:31:09 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:05.364 anon_hugepages=0 00:05:05.364 06:31:09 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:05.364 06:31:09 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:05.364 06:31:09 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:05.364 06:31:09 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:05:05.364 06:31:09 -- setup/common.sh@18 -- # local node= 00:05:05.364 06:31:09 -- setup/common.sh@19 -- # local var val 00:05:05.364 06:31:09 -- setup/common.sh@20 -- # local mem_f mem 00:05:05.364 06:31:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.364 06:31:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.364 06:31:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.364 06:31:09 -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.364 06:31:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.364 06:31:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39814832 kB' 'MemAvailable: 43840884 kB' 'Buffers: 2696 kB' 'Cached: 15990348 kB' 'SwapCached: 0 kB' 'Active: 12831700 kB' 'Inactive: 3662132 kB' 'Active(anon): 12264968 kB' 'Inactive(anon): 0 kB' 'Active(file): 566732 kB' 'Inactive(file): 3662132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504484 kB' 'Mapped: 217036 kB' 'Shmem: 11764180 kB' 'KReclaimable: 441624 kB' 'Slab: 843816 kB' 'SReclaimable: 441624 kB' 'SUnreclaim: 402192 kB' 'KernelStack: 12880 kB' 'PageTables: 9216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13410388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197008 kB' 'VmallocChunk: 0 kB' 'Percpu: 40320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2563676 kB' 'DirectMap2M: 24619008 kB' 'DirectMap1G: 41943040 kB' 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:05.364 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.364 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.364 06:31:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # 
continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 
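[editor note] At this point the trace has already recorded anon=0, surp=0 and resv=0, and the HugePages_Total scan is about to echo 1024; hugepages.sh then asserts that the kernel-reported total matches the requested count plus surplus and reserved pages. A hedged sketch of that consistency check, reusing the hypothetical helper from the sketch above and the values visible in this log:

# Sketch of the accounting check the trace performs (values taken from this run).
nr_hugepages=1024
surp=0; resv=0
total=$(get_meminfo_sketch HugePages_Total)
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total pages"
else
    echo "mismatch: kernel reports $total, expected $((nr_hugepages + surp + resv))"
fi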
00:05:05.365 06:31:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.365 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.365 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.366 06:31:09 -- setup/common.sh@33 -- # echo 1024 00:05:05.366 06:31:09 -- setup/common.sh@33 -- # return 0 00:05:05.366 06:31:09 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:05.366 06:31:09 -- setup/hugepages.sh@112 -- # get_nodes 00:05:05.366 06:31:09 -- setup/hugepages.sh@27 -- # local node 00:05:05.366 06:31:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:05.366 06:31:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:05.366 06:31:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:05.366 06:31:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:05.366 06:31:09 -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:05.366 06:31:09 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:05.366 06:31:09 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:05.366 06:31:09 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:05.366 06:31:09 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:05.366 06:31:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:05.366 06:31:09 -- setup/common.sh@18 -- # local node=0 00:05:05.366 06:31:09 -- setup/common.sh@19 -- # local var val 00:05:05.366 06:31:09 -- setup/common.sh@20 -- # local mem_f mem 00:05:05.366 06:31:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.366 06:31:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:05.366 06:31:09 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:05.366 06:31:09 -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.366 06:31:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.366 06:31:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 16005232 kB' 'MemUsed: 16824652 kB' 'SwapCached: 0 
kB' 'Active: 9865252 kB' 'Inactive: 3496876 kB' 'Active(anon): 9481880 kB' 'Inactive(anon): 0 kB' 'Active(file): 383372 kB' 'Inactive(file): 3496876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 13057732 kB' 'Mapped: 143536 kB' 'AnonPages: 307564 kB' 'Shmem: 9177484 kB' 'KernelStack: 6872 kB' 'PageTables: 5652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 290008 kB' 'Slab: 523976 kB' 'SReclaimable: 290008 kB' 'SUnreclaim: 233968 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.366 
06:31:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # IFS=': 
' 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.366 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.366 06:31:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.367 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.367 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.367 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.367 06:31:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.367 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.367 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.367 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.367 06:31:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.367 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.367 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.367 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.367 06:31:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.367 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.367 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.367 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.367 06:31:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.367 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.367 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.367 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.367 06:31:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.367 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.367 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.367 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.367 06:31:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.367 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.367 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.367 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.367 06:31:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.367 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.367 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.367 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.367 06:31:09 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.367 06:31:09 -- setup/common.sh@32 -- # continue 00:05:05.367 06:31:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.367 06:31:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.367 06:31:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.367 06:31:09 -- setup/common.sh@33 -- # echo 0 00:05:05.367 06:31:09 -- setup/common.sh@33 -- # return 0 00:05:05.367 06:31:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:05.367 06:31:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:05.367 06:31:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:05.367 06:31:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:05.367 06:31:09 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:05.367 node0=1024 expecting 1024 00:05:05.367 06:31:09 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:05.367 00:05:05.367 real 0m2.464s 00:05:05.367 user 0m0.647s 00:05:05.367 sys 0m0.829s 00:05:05.367 06:31:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:05.367 06:31:09 -- common/autotest_common.sh@10 -- # set +x 00:05:05.367 ************************************ 00:05:05.367 END TEST default_setup 00:05:05.367 ************************************ 00:05:05.367 06:31:09 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:05.367 06:31:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.367 06:31:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.367 06:31:09 -- common/autotest_common.sh@10 -- # set +x 00:05:05.625 ************************************ 00:05:05.625 START TEST per_node_1G_alloc 00:05:05.625 ************************************ 00:05:05.625 06:31:09 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:05:05.625 06:31:09 -- setup/hugepages.sh@143 -- # local IFS=, 00:05:05.625 06:31:09 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:05:05.625 06:31:09 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:05.625 06:31:09 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:05:05.625 06:31:09 -- setup/hugepages.sh@51 -- # shift 00:05:05.625 06:31:09 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:05:05.625 06:31:09 -- setup/hugepages.sh@52 -- # local node_ids 00:05:05.625 06:31:09 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:05.625 06:31:09 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:05.625 06:31:09 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:05:05.625 06:31:09 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:05:05.625 06:31:09 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:05.625 06:31:09 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:05.625 06:31:09 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:05.625 06:31:09 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:05.625 06:31:09 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:05.625 06:31:09 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:05:05.625 06:31:09 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:05.625 06:31:09 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:05.625 06:31:09 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:05.625 06:31:09 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:05.625 06:31:09 -- setup/hugepages.sh@73 -- # return 0 00:05:05.625 06:31:09 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:05.625 
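The get_test_nr_hugepages 1048576 0 1 call traced above reduces the 1 GiB (1048576 kB) request to a per-node hugepage count before the allocation is attempted. A minimal bash sketch of that arithmetic, assuming the default 2048 kB hugepage size reported by this run's meminfo dumps (variable names below are illustrative, not the script's own):

    size_kb=1048576                 # requested allocation in kB, as passed in the trace
    default_hugepage_kb=2048        # assumed default size ("Hugepagesize: 2048 kB" in the dumps)
    nr_hugepages=$(( size_kb / default_hugepage_kb ))   # 1048576 / 2048 = 512
    declare -A nodes_req            # per-node request, mirroring nodes_test in the trace
    for node in 0 1; do             # the test targets NUMA nodes 0 and 1
        nodes_req[$node]=$nr_hugepages
    done
    echo "NRHUGE=$nr_hugepages HUGENODE=0,1"   # matches the values the test exports next

This is only a sketch of the division being performed; the actual setup.sh invocation that consumes NRHUGE and HUGENODE follows in the trace below.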
06:31:09 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:05:05.625 06:31:09 -- setup/hugepages.sh@146 -- # setup output 00:05:05.625 06:31:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.625 06:31:09 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:06.560 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:06.560 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:06.560 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:06.560 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:06.560 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:06.560 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:06.560 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:06.560 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:06.560 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:06.560 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:06.560 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:06.560 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:06.560 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:06.560 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:06.560 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:06.560 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:06.560 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:06.822 06:31:11 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:05:06.822 06:31:11 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:06.822 06:31:11 -- setup/hugepages.sh@89 -- # local node 00:05:06.822 06:31:11 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:06.822 06:31:11 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:06.822 06:31:11 -- setup/hugepages.sh@92 -- # local surp 00:05:06.822 06:31:11 -- setup/hugepages.sh@93 -- # local resv 00:05:06.822 06:31:11 -- setup/hugepages.sh@94 -- # local anon 00:05:06.822 06:31:11 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:06.822 06:31:11 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:06.822 06:31:11 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:06.822 06:31:11 -- setup/common.sh@18 -- # local node= 00:05:06.822 06:31:11 -- setup/common.sh@19 -- # local var val 00:05:06.822 06:31:11 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.822 06:31:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.822 06:31:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.822 06:31:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.822 06:31:11 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.822 06:31:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.822 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.822 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.822 06:31:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39818832 kB' 'MemAvailable: 43845060 kB' 'Buffers: 2696 kB' 'Cached: 15990404 kB' 'SwapCached: 0 kB' 'Active: 12826696 kB' 'Inactive: 3662132 kB' 'Active(anon): 12259964 kB' 'Inactive(anon): 0 kB' 'Active(file): 566732 kB' 'Inactive(file): 3662132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498900 kB' 'Mapped: 216632 
kB' 'Shmem: 11764236 kB' 'KReclaimable: 441800 kB' 'Slab: 843652 kB' 'SReclaimable: 441800 kB' 'SUnreclaim: 401852 kB' 'KernelStack: 12912 kB' 'PageTables: 9264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13404892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197164 kB' 'VmallocChunk: 0 kB' 'Percpu: 40320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2563676 kB' 'DirectMap2M: 24619008 kB' 'DirectMap1G: 41943040 kB' 00:05:06.822 06:31:11 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.822 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.822 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.822 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.822 06:31:11 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.822 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.822 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.822 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.822 06:31:11 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.822 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.822 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.822 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.822 06:31:11 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.822 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.822 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 
-- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- 
setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.823 06:31:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.823 06:31:11 -- setup/common.sh@33 -- # echo 0 00:05:06.823 06:31:11 -- setup/common.sh@33 -- # return 0 00:05:06.823 06:31:11 -- setup/hugepages.sh@97 -- # anon=0 00:05:06.823 06:31:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:06.823 06:31:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:06.823 06:31:11 -- setup/common.sh@18 -- # local node= 00:05:06.823 06:31:11 -- setup/common.sh@19 -- # local var val 00:05:06.823 06:31:11 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.823 06:31:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.823 06:31:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.823 06:31:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.823 06:31:11 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.823 06:31:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.823 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39818832 kB' 'MemAvailable: 43845060 kB' 'Buffers: 2696 kB' 'Cached: 15990404 kB' 'SwapCached: 0 kB' 'Active: 12827224 kB' 'Inactive: 3662132 kB' 'Active(anon): 12260492 kB' 'Inactive(anon): 0 kB' 'Active(file): 566732 kB' 'Inactive(file): 3662132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499512 kB' 'Mapped: 216712 kB' 'Shmem: 11764236 kB' 'KReclaimable: 441800 kB' 'Slab: 843688 kB' 'SReclaimable: 441800 kB' 'SUnreclaim: 401888 kB' 'KernelStack: 12896 kB' 'PageTables: 9232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13404904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197132 kB' 'VmallocChunk: 0 kB' 'Percpu: 40320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 
1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2563676 kB' 'DirectMap2M: 24619008 kB' 'DirectMap1G: 41943040 kB' 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 
06:31:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 
06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.824 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.824 06:31:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.824 
06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # read 
-r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.825 06:31:11 -- setup/common.sh@33 -- # echo 0 00:05:06.825 06:31:11 -- setup/common.sh@33 -- # return 0 00:05:06.825 06:31:11 -- setup/hugepages.sh@99 -- # surp=0 00:05:06.825 06:31:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:06.825 06:31:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:06.825 06:31:11 -- setup/common.sh@18 -- # local node= 00:05:06.825 06:31:11 -- setup/common.sh@19 -- # local var val 00:05:06.825 06:31:11 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.825 06:31:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.825 06:31:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.825 06:31:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.825 06:31:11 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.825 06:31:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39819444 kB' 'MemAvailable: 43845672 kB' 'Buffers: 2696 kB' 'Cached: 15990420 kB' 'SwapCached: 0 kB' 'Active: 12826088 kB' 'Inactive: 3662132 kB' 'Active(anon): 12259356 kB' 'Inactive(anon): 0 kB' 'Active(file): 566732 kB' 'Inactive(file): 3662132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498308 kB' 'Mapped: 216628 kB' 'Shmem: 11764252 kB' 'KReclaimable: 441800 kB' 'Slab: 843680 kB' 'SReclaimable: 441800 kB' 'SUnreclaim: 401880 kB' 'KernelStack: 12880 kB' 'PageTables: 9120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13404920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197132 kB' 'VmallocChunk: 0 kB' 'Percpu: 40320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2563676 kB' 'DirectMap2M: 24619008 kB' 'DirectMap1G: 41943040 kB' 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # 
continue 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.825 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.825 06:31:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.826 
06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.826 06:31:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.826 06:31:11 -- setup/common.sh@33 -- # echo 0 00:05:06.826 06:31:11 -- setup/common.sh@33 -- # return 0 00:05:06.826 06:31:11 -- setup/hugepages.sh@100 -- # resv=0 00:05:06.826 06:31:11 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:06.826 nr_hugepages=1024 00:05:06.826 06:31:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:06.826 resv_hugepages=0 00:05:06.826 06:31:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:06.826 surplus_hugepages=0 00:05:06.826 06:31:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:06.826 anon_hugepages=0 00:05:06.826 06:31:11 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:06.826 06:31:11 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:06.826 06:31:11 -- setup/hugepages.sh@110 -- # get_meminfo 
HugePages_Total 00:05:06.826 06:31:11 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:06.826 06:31:11 -- setup/common.sh@18 -- # local node= 00:05:06.826 06:31:11 -- setup/common.sh@19 -- # local var val 00:05:06.826 06:31:11 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.826 06:31:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.826 06:31:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.826 06:31:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.826 06:31:11 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.826 06:31:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.826 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.827 06:31:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39820112 kB' 'MemAvailable: 43846332 kB' 'Buffers: 2696 kB' 'Cached: 15990432 kB' 'SwapCached: 0 kB' 'Active: 12826412 kB' 'Inactive: 3662132 kB' 'Active(anon): 12259680 kB' 'Inactive(anon): 0 kB' 'Active(file): 566732 kB' 'Inactive(file): 3662132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498620 kB' 'Mapped: 216628 kB' 'Shmem: 11764264 kB' 'KReclaimable: 441792 kB' 'Slab: 843672 kB' 'SReclaimable: 441792 kB' 'SUnreclaim: 401880 kB' 'KernelStack: 12912 kB' 'PageTables: 9224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13404932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197132 kB' 'VmallocChunk: 0 kB' 'Percpu: 40320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2563676 kB' 'DirectMap2M: 24619008 kB' 'DirectMap1G: 41943040 kB' 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.827 06:31:11 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.827 06:31:11 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.827 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.827 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.828 06:31:11 -- setup/common.sh@31 -- 
# IFS=': ' 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.828 06:31:11 -- setup/common.sh@33 -- # echo 1024 00:05:06.828 06:31:11 -- setup/common.sh@33 -- # return 0 00:05:06.828 06:31:11 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:06.828 06:31:11 -- setup/hugepages.sh@112 -- # get_nodes 00:05:06.828 06:31:11 -- setup/hugepages.sh@27 -- # local node 00:05:06.828 06:31:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:06.828 06:31:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:06.828 06:31:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:06.828 06:31:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:06.828 06:31:11 -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:06.828 06:31:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:06.828 06:31:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:06.828 06:31:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:06.828 06:31:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:06.828 06:31:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:06.828 06:31:11 -- setup/common.sh@18 -- # local node=0 00:05:06.828 06:31:11 -- setup/common.sh@19 -- # local var val 00:05:06.828 06:31:11 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.828 06:31:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.828 06:31:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:06.828 06:31:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:06.828 06:31:11 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.828 06:31:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.828 06:31:11 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 32829884 kB' 'MemFree: 17057916 kB' 'MemUsed: 15771968 kB' 'SwapCached: 0 kB' 'Active: 9864780 kB' 'Inactive: 3496876 kB' 'Active(anon): 9481408 kB' 'Inactive(anon): 0 kB' 'Active(file): 383372 kB' 'Inactive(file): 3496876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 13057804 kB' 'Mapped: 142720 kB' 'AnonPages: 306956 kB' 'Shmem: 9177556 kB' 'KernelStack: 6856 kB' 'PageTables: 5560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 290008 kB' 'Slab: 523752 kB' 'SReclaimable: 290008 kB' 'SUnreclaim: 233744 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 
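[Editor's note] The long runs of "continue" entries above and below come from the field-matching loop in setup/common.sh's get_meminfo(), which walks /proc/meminfo (or a per-node copy under /sys) one line at a time until it hits the requested key. A minimal standalone sketch of that behaviour, simplified and under a hypothetical name (get_meminfo_sketch, not the SPDK helper itself), could look like this:

get_meminfo_sketch() {
    local get=$1 node=$2               # key to look up, optional NUMA node number
    local mem_f=/proc/meminfo
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    local line var val _
    while IFS= read -r line; do
        line=${line#"Node $node "}     # per-node files prefix each line with "Node N "
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then  # the comparison the xtrace shows for every key
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    echo 0                             # key not present
}
# Example: get_meminfo_sketch HugePages_Surp 0   -> 0, matching the node0 read in this trace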
00:05:06.828 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.828 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.828 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- 
setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 
00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@33 -- # echo 0 00:05:06.829 06:31:11 -- setup/common.sh@33 -- # return 0 00:05:06.829 06:31:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:06.829 06:31:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:06.829 06:31:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:06.829 06:31:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:06.829 06:31:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:06.829 06:31:11 -- setup/common.sh@18 -- # local node=1 00:05:06.829 06:31:11 -- setup/common.sh@19 -- # local var val 00:05:06.829 06:31:11 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.829 06:31:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.829 06:31:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:06.829 06:31:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:06.829 06:31:11 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.829 06:31:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711852 kB' 'MemFree: 22762636 kB' 'MemUsed: 4949216 kB' 'SwapCached: 0 kB' 'Active: 2961600 kB' 'Inactive: 165256 kB' 'Active(anon): 2778240 kB' 'Inactive(anon): 0 kB' 'Active(file): 183360 kB' 'Inactive(file): 165256 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2935340 kB' 'Mapped: 73908 kB' 'AnonPages: 191624 kB' 'Shmem: 2586724 kB' 'KernelStack: 6040 kB' 'PageTables: 3616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 151784 kB' 'Slab: 319920 kB' 'SReclaimable: 151784 kB' 'SUnreclaim: 168136 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 
00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.829 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.829 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.830 06:31:11 -- 
setup/common.sh@32 -- # continue 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.830 06:31:11 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # continue 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.830 06:31:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.830 06:31:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.830 06:31:11 -- setup/common.sh@33 -- # echo 0 00:05:06.830 06:31:11 -- setup/common.sh@33 -- # return 0 00:05:06.830 06:31:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:06.830 06:31:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:06.830 06:31:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:06.830 06:31:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:06.830 06:31:11 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:06.830 node0=512 expecting 512 00:05:06.830 06:31:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:06.830 06:31:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:06.830 06:31:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:06.830 06:31:11 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:06.830 node1=512 expecting 512 00:05:06.830 06:31:11 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:06.830 00:05:06.830 real 0m1.332s 00:05:06.830 user 0m0.584s 00:05:06.830 sys 0m0.705s 00:05:06.830 06:31:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:06.830 06:31:11 -- common/autotest_common.sh@10 -- # set +x 00:05:06.830 ************************************ 00:05:06.830 END TEST per_node_1G_alloc 00:05:06.830 ************************************ 00:05:06.830 06:31:11 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:06.830 
06:31:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:06.830 06:31:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:06.830 06:31:11 -- common/autotest_common.sh@10 -- # set +x 00:05:07.088 ************************************ 00:05:07.088 START TEST even_2G_alloc 00:05:07.088 ************************************ 00:05:07.088 06:31:11 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:05:07.088 06:31:11 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:07.088 06:31:11 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:07.088 06:31:11 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:07.088 06:31:11 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:07.088 06:31:11 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:07.088 06:31:11 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:07.088 06:31:11 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:07.088 06:31:11 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:07.088 06:31:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:07.088 06:31:11 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:07.089 06:31:11 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:07.089 06:31:11 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:07.089 06:31:11 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:07.089 06:31:11 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:07.089 06:31:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:07.089 06:31:11 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:07.089 06:31:11 -- setup/hugepages.sh@83 -- # : 512 00:05:07.089 06:31:11 -- setup/hugepages.sh@84 -- # : 1 00:05:07.089 06:31:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:07.089 06:31:11 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:07.089 06:31:11 -- setup/hugepages.sh@83 -- # : 0 00:05:07.089 06:31:11 -- setup/hugepages.sh@84 -- # : 0 00:05:07.089 06:31:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:07.089 06:31:11 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:07.089 06:31:11 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:07.089 06:31:11 -- setup/hugepages.sh@153 -- # setup output 00:05:07.089 06:31:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.089 06:31:11 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:08.023 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:08.023 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:08.023 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:08.023 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:08.023 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:08.023 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:08.023 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:08.023 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:08.023 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:08.023 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:08.023 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:08.023 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:08.023 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:08.023 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:08.023 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:08.023 0000:80:04.1 (8086 0e21): 
Already using the vfio-pci driver 00:05:08.023 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:08.285 06:31:12 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:08.286 06:31:12 -- setup/hugepages.sh@89 -- # local node 00:05:08.286 06:31:12 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:08.286 06:31:12 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:08.286 06:31:12 -- setup/hugepages.sh@92 -- # local surp 00:05:08.286 06:31:12 -- setup/hugepages.sh@93 -- # local resv 00:05:08.286 06:31:12 -- setup/hugepages.sh@94 -- # local anon 00:05:08.286 06:31:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:08.286 06:31:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:08.286 06:31:12 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:08.286 06:31:12 -- setup/common.sh@18 -- # local node= 00:05:08.286 06:31:12 -- setup/common.sh@19 -- # local var val 00:05:08.286 06:31:12 -- setup/common.sh@20 -- # local mem_f mem 00:05:08.286 06:31:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.286 06:31:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.286 06:31:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.286 06:31:12 -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.286 06:31:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 06:31:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39813600 kB' 'MemAvailable: 43839820 kB' 'Buffers: 2696 kB' 'Cached: 15990500 kB' 'SwapCached: 0 kB' 'Active: 12827060 kB' 'Inactive: 3662132 kB' 'Active(anon): 12260328 kB' 'Inactive(anon): 0 kB' 'Active(file): 566732 kB' 'Inactive(file): 3662132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499240 kB' 'Mapped: 216772 kB' 'Shmem: 11764332 kB' 'KReclaimable: 441792 kB' 'Slab: 843732 kB' 'SReclaimable: 441792 kB' 'SUnreclaim: 401940 kB' 'KernelStack: 12944 kB' 'PageTables: 9368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13405156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197260 kB' 'VmallocChunk: 0 kB' 'Percpu: 40320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2563676 kB' 'DirectMap2M: 24619008 kB' 'DirectMap1G: 41943040 kB' 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # continue 
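[Editor's note] The verify step that starts here first tests transparent huge pages: the "always [madvise] never" string above is the content of /sys/kernel/mm/transparent_hugepage/enabled, and AnonHugePages is only counted when THP is not pinned to [never]. A simplified sketch of that decision (not the SPDK script itself):

thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
if [[ $thp_mode != *"[never]"* ]]; then
    # THP can hand out anonymous huge pages, so record how many kB are in use before checking totals
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
else
    anon=0
fi
echo "anon_hugepages=${anon:-0}"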
00:05:08.286 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 
06:31:12 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 06:31:12 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.287 06:31:12 -- setup/common.sh@33 -- # echo 0 00:05:08.287 06:31:12 -- setup/common.sh@33 -- # 
return 0 00:05:08.287 06:31:12 -- setup/hugepages.sh@97 -- # anon=0 00:05:08.287 06:31:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:08.287 06:31:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:08.287 06:31:12 -- setup/common.sh@18 -- # local node= 00:05:08.287 06:31:12 -- setup/common.sh@19 -- # local var val 00:05:08.287 06:31:12 -- setup/common.sh@20 -- # local mem_f mem 00:05:08.287 06:31:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.287 06:31:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.287 06:31:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.287 06:31:12 -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.287 06:31:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 06:31:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39814064 kB' 'MemAvailable: 43840284 kB' 'Buffers: 2696 kB' 'Cached: 15990504 kB' 'SwapCached: 0 kB' 'Active: 12827168 kB' 'Inactive: 3662132 kB' 'Active(anon): 12260436 kB' 'Inactive(anon): 0 kB' 'Active(file): 566732 kB' 'Inactive(file): 3662132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499380 kB' 'Mapped: 216772 kB' 'Shmem: 11764336 kB' 'KReclaimable: 441792 kB' 'Slab: 843700 kB' 'SReclaimable: 441792 kB' 'SUnreclaim: 401908 kB' 'KernelStack: 12944 kB' 'PageTables: 9320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13405168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197228 kB' 'VmallocChunk: 0 kB' 'Percpu: 40320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2563676 kB' 'DirectMap2M: 24619008 kB' 'DirectMap1G: 41943040 kB' 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.287 06:31:12 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.287 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.287 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.288 06:31:12 -- setup/common.sh@32 
-- # continue 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.288 06:31:12 -- setup/common.sh@33 -- # echo 0 00:05:08.288 06:31:12 -- setup/common.sh@33 -- # return 0 00:05:08.288 06:31:12 -- setup/hugepages.sh@99 -- # surp=0 00:05:08.288 06:31:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:08.288 06:31:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:08.288 06:31:12 -- setup/common.sh@18 -- # local node= 00:05:08.288 06:31:12 -- setup/common.sh@19 -- # local var val 00:05:08.288 06:31:12 -- setup/common.sh@20 -- # local mem_f mem 00:05:08.288 06:31:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.288 06:31:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.288 06:31:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.288 06:31:12 -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.288 06:31:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 06:31:12 -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39813664 kB' 'MemAvailable: 43839884 kB' 'Buffers: 2696 kB' 'Cached: 15990512 kB' 'SwapCached: 0 kB' 'Active: 12827044 kB' 'Inactive: 3662132 kB' 'Active(anon): 12260312 kB' 'Inactive(anon): 0 kB' 'Active(file): 566732 kB' 'Inactive(file): 3662132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499224 kB' 'Mapped: 216740 kB' 'Shmem: 11764344 kB' 'KReclaimable: 441792 kB' 'Slab: 843700 kB' 'SReclaimable: 441792 kB' 'SUnreclaim: 401908 kB' 'KernelStack: 12928 kB' 'PageTables: 9232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13405184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197212 kB' 'VmallocChunk: 0 kB' 'Percpu: 40320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2563676 kB' 'DirectMap2M: 24619008 kB' 'DirectMap1G: 41943040 kB' 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 06:31:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 
06:31:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- 
# IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 
06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # read 
-r var val _ 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.289 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.290 06:31:12 -- setup/common.sh@33 -- # echo 0 00:05:08.290 06:31:12 -- setup/common.sh@33 -- # return 0 00:05:08.290 06:31:12 -- setup/hugepages.sh@100 -- # resv=0 00:05:08.290 06:31:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:08.290 nr_hugepages=1024 00:05:08.290 06:31:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:08.290 resv_hugepages=0 00:05:08.290 06:31:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:08.290 surplus_hugepages=0 00:05:08.290 06:31:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:08.290 anon_hugepages=0 00:05:08.290 06:31:12 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:08.290 06:31:12 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:08.290 06:31:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:08.290 06:31:12 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:08.290 06:31:12 -- setup/common.sh@18 -- # local node= 00:05:08.290 06:31:12 -- setup/common.sh@19 -- # local var val 00:05:08.290 06:31:12 -- setup/common.sh@20 -- # local mem_f mem 00:05:08.290 06:31:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.290 06:31:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.290 06:31:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.290 06:31:12 -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.290 06:31:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.290 06:31:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39814320 kB' 'MemAvailable: 43840540 kB' 'Buffers: 2696 kB' 'Cached: 15990528 kB' 'SwapCached: 0 kB' 'Active: 12826872 kB' 'Inactive: 3662132 kB' 'Active(anon): 12260140 kB' 'Inactive(anon): 0 kB' 'Active(file): 566732 kB' 'Inactive(file): 3662132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499000 kB' 'Mapped: 216664 kB' 'Shmem: 11764360 kB' 'KReclaimable: 441792 kB' 'Slab: 843692 kB' 'SReclaimable: 441792 kB' 'SUnreclaim: 401900 kB' 'KernelStack: 12912 kB' 
'PageTables: 9180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13405200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197212 kB' 'VmallocChunk: 0 kB' 'Percpu: 40320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2563676 kB' 'DirectMap2M: 24619008 kB' 'DirectMap1G: 41943040 kB' 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.290 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.290 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.291 06:31:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:08.291 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.291 06:31:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.291 06:31:12 -- setup/common.sh@33 -- # echo 1024 00:05:08.291 06:31:12 -- setup/common.sh@33 -- # return 0 00:05:08.291 06:31:12 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:08.291 06:31:12 -- setup/hugepages.sh@112 -- # get_nodes 00:05:08.291 06:31:12 -- setup/hugepages.sh@27 -- # local node 00:05:08.291 06:31:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:08.291 06:31:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:08.291 06:31:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:08.291 06:31:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:08.291 06:31:12 -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:08.291 06:31:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:08.291 06:31:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:08.291 06:31:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:08.291 06:31:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:08.291 06:31:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:08.291 06:31:12 -- setup/common.sh@18 -- # local node=0 00:05:08.291 06:31:12 -- setup/common.sh@19 -- # local var val 00:05:08.291 06:31:12 -- setup/common.sh@20 -- # local mem_f mem 00:05:08.291 06:31:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.291 06:31:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:08.291 06:31:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:08.291 06:31:12 -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.291 06:31:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.291 06:31:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 17062080 kB' 'MemUsed: 15767804 kB' 'SwapCached: 0 kB' 'Active: 9862488 kB' 'Inactive: 3496876 kB' 'Active(anon): 9479116 kB' 'Inactive(anon): 0 kB' 'Active(file): 383372 kB' 'Inactive(file): 3496876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 13057888 kB' 'Mapped: 141760 kB' 'AnonPages: 304656 kB' 'Shmem: 9177640 kB' 'KernelStack: 6840 kB' 'PageTables: 5336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 290008 kB' 'Slab: 523764 kB' 'SReclaimable: 290008 kB' 'SUnreclaim: 233756 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.291 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.291 06:31:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 
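The trace above is setup/common.sh walking /sys/devices/system/node/node0/meminfo one key at a time until it reaches the requested field. Below is a minimal bash sketch of what that get_meminfo lookup appears to do, reconstructed from the calls visible in this log; the function name get_meminfo_sketch and the echo-0-when-absent fallback are assumptions, while the file locations and the stripped "Node <N> " prefix follow the traced commands.

get_meminfo_sketch() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
    # Per-node counters live under /sys and carry a "Node <N> " prefix.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#"Node $node "}              # drop the per-node prefix, if present
        IFS=': ' read -r var val _ <<< "$line"  # e.g. var=HugePages_Surp val=0
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    echo 0  # assumption: report 0 when the key is missing
}

Called as get_meminfo_sketch HugePages_Surp 0, it would return the node0 value (0 in this run), matching the result the script echoes below.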
00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.292 06:31:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.292 06:31:12 -- setup/common.sh@33 -- # echo 0 00:05:08.292 06:31:12 -- setup/common.sh@33 -- # return 0 00:05:08.292 06:31:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:08.292 06:31:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:08.292 06:31:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:08.292 06:31:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:08.292 06:31:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:08.292 06:31:12 -- setup/common.sh@18 -- # local node=1 00:05:08.292 06:31:12 -- setup/common.sh@19 -- # local var val 00:05:08.292 06:31:12 -- setup/common.sh@20 -- # local mem_f mem 00:05:08.292 06:31:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.292 06:31:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:08.292 06:31:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:08.292 06:31:12 -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.292 06:31:12 -- setup/common.sh@29 
-- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.292 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711852 kB' 'MemFree: 22760304 kB' 'MemUsed: 4951548 kB' 'SwapCached: 0 kB' 'Active: 2958236 kB' 'Inactive: 165256 kB' 'Active(anon): 2774876 kB' 'Inactive(anon): 0 kB' 'Active(file): 183360 kB' 'Inactive(file): 165256 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2935352 kB' 'Mapped: 73844 kB' 'AnonPages: 188192 kB' 'Shmem: 2586736 kB' 'KernelStack: 5992 kB' 'PageTables: 3376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 151784 kB' 'Slab: 319880 kB' 'SReclaimable: 151784 kB' 'SUnreclaim: 168096 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 
00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ HugePages_Total 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # continue 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.293 06:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.293 06:31:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.293 06:31:12 -- setup/common.sh@33 -- # echo 0 00:05:08.293 06:31:12 -- setup/common.sh@33 -- # return 0 00:05:08.293 06:31:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:08.293 06:31:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:08.293 06:31:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:08.293 06:31:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:08.293 06:31:12 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:08.293 node0=512 expecting 512 00:05:08.293 06:31:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:08.293 06:31:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:08.293 06:31:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:08.293 06:31:12 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:08.293 node1=512 expecting 512 00:05:08.293 06:31:12 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:08.293 00:05:08.293 real 0m1.374s 00:05:08.293 user 0m0.587s 00:05:08.293 sys 0m0.747s 00:05:08.293 06:31:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:08.293 06:31:12 -- common/autotest_common.sh@10 -- # set +x 00:05:08.294 ************************************ 00:05:08.294 END TEST even_2G_alloc 00:05:08.294 ************************************ 00:05:08.294 06:31:12 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:08.294 06:31:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:08.294 06:31:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.294 06:31:12 -- common/autotest_common.sh@10 -- # set +x 00:05:08.551 ************************************ 00:05:08.551 START TEST odd_alloc 00:05:08.551 ************************************ 00:05:08.551 06:31:12 -- common/autotest_common.sh@1111 -- # odd_alloc 00:05:08.551 06:31:12 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:08.551 06:31:12 -- setup/hugepages.sh@49 -- # local size=2098176 00:05:08.551 06:31:12 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:08.551 06:31:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:08.551 06:31:12 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:08.551 06:31:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:08.551 06:31:12 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:08.551 06:31:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:08.551 06:31:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:08.551 06:31:12 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:08.551 06:31:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:08.551 06:31:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:08.551 06:31:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:08.551 06:31:12 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:08.551 06:31:12 -- setup/hugepages.sh@81 -- # (( 
_no_nodes > 0 )) 00:05:08.551 06:31:12 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:08.551 06:31:12 -- setup/hugepages.sh@83 -- # : 513 00:05:08.551 06:31:12 -- setup/hugepages.sh@84 -- # : 1 00:05:08.551 06:31:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:08.551 06:31:12 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:05:08.551 06:31:12 -- setup/hugepages.sh@83 -- # : 0 00:05:08.551 06:31:12 -- setup/hugepages.sh@84 -- # : 0 00:05:08.551 06:31:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:08.551 06:31:12 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:08.551 06:31:12 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:08.551 06:31:12 -- setup/hugepages.sh@160 -- # setup output 00:05:08.551 06:31:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.551 06:31:12 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:09.482 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:09.482 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:09.482 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:09.482 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:09.482 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:09.482 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:09.482 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:09.482 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:09.482 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:09.482 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:09.482 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:09.482 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:09.482 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:09.482 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:09.482 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:09.482 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:09.482 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:09.744 06:31:14 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:09.744 06:31:14 -- setup/hugepages.sh@89 -- # local node 00:05:09.744 06:31:14 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:09.744 06:31:14 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:09.744 06:31:14 -- setup/hugepages.sh@92 -- # local surp 00:05:09.744 06:31:14 -- setup/hugepages.sh@93 -- # local resv 00:05:09.744 06:31:14 -- setup/hugepages.sh@94 -- # local anon 00:05:09.744 06:31:14 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:09.744 06:31:14 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:09.744 06:31:14 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:09.744 06:31:14 -- setup/common.sh@18 -- # local node= 00:05:09.744 06:31:14 -- setup/common.sh@19 -- # local var val 00:05:09.744 06:31:14 -- setup/common.sh@20 -- # local mem_f mem 00:05:09.744 06:31:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.744 06:31:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.744 06:31:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.744 06:31:14 -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.744 06:31:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.744 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.744 06:31:14 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39807152 kB' 'MemAvailable: 43833348 kB' 'Buffers: 2696 kB' 'Cached: 15990600 kB' 'SwapCached: 0 kB' 'Active: 12820920 kB' 'Inactive: 3662132 kB' 'Active(anon): 12254188 kB' 'Inactive(anon): 0 kB' 'Active(file): 566732 kB' 'Inactive(file): 3662132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493048 kB' 'Mapped: 215616 kB' 'Shmem: 11764432 kB' 'KReclaimable: 441768 kB' 'Slab: 843320 kB' 'SReclaimable: 441768 kB' 'SUnreclaim: 401552 kB' 'KernelStack: 12960 kB' 'PageTables: 8700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609872 kB' 'Committed_AS: 13378008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197164 kB' 'VmallocChunk: 0 kB' 'Percpu: 40320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2563676 kB' 'DirectMap2M: 24619008 kB' 'DirectMap1G: 41943040 kB' 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 
06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
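Earlier in this pass (setup/hugepages.sh@96-97) verify_nr_hugepages decides whether anonymous huge pages should be counted at all: it checks the THP policy string, "always [madvise] never" on this host, and only samples AnonHugePages when the policy is not pinned to [never]; the counter comes back 0 kB here. A small sketch of that gate, where the sysfs path is the standard THP knob and the variable names are assumptions:

    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" on this host
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        # THP may be handing out anonymous huge pages, so sample the counter
        anon=$(get_meminfo AnonHugePages)                     # 0 kB in this run
    fi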
00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.745 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.745 06:31:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.745 06:31:14 -- setup/common.sh@33 -- # echo 0 00:05:09.745 06:31:14 -- setup/common.sh@33 -- # return 0 00:05:09.745 06:31:14 -- setup/hugepages.sh@97 -- # anon=0 00:05:09.746 06:31:14 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:09.746 06:31:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:09.746 06:31:14 -- setup/common.sh@18 -- # local node= 00:05:09.746 06:31:14 -- setup/common.sh@19 -- # local var val 00:05:09.746 06:31:14 -- setup/common.sh@20 -- # local mem_f mem 00:05:09.746 06:31:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.746 06:31:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.746 06:31:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.746 06:31:14 -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.746 06:31:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 06:31:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39809476 kB' 'MemAvailable: 43835672 kB' 'Buffers: 2696 kB' 'Cached: 15990600 kB' 'SwapCached: 0 kB' 'Active: 12821848 kB' 'Inactive: 3662132 kB' 'Active(anon): 12255116 kB' 'Inactive(anon): 0 kB' 'Active(file): 566732 kB' 'Inactive(file): 3662132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493496 kB' 'Mapped: 215676 kB' 'Shmem: 11764432 kB' 'KReclaimable: 
441768 kB' 'Slab: 843352 kB' 'SReclaimable: 441768 kB' 'SUnreclaim: 401584 kB' 'KernelStack: 13152 kB' 'PageTables: 9044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609872 kB' 'Committed_AS: 13379396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197292 kB' 'VmallocChunk: 0 kB' 'Percpu: 40320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2563676 kB' 'DirectMap2M: 24619008 kB' 'DirectMap1G: 41943040 kB' 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.746 06:31:14 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.746 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.746 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.747 
06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.747 06:31:14 -- setup/common.sh@33 -- # echo 0 00:05:09.747 06:31:14 -- setup/common.sh@33 -- # return 0 00:05:09.747 06:31:14 -- setup/hugepages.sh@99 -- # surp=0 00:05:09.747 06:31:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:09.747 06:31:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:09.747 06:31:14 -- setup/common.sh@18 -- # local node= 00:05:09.747 06:31:14 -- setup/common.sh@19 -- # local var val 00:05:09.747 06:31:14 -- setup/common.sh@20 -- # local mem_f mem 00:05:09.747 06:31:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.747 06:31:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.747 06:31:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.747 06:31:14 -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.747 06:31:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.747 06:31:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39807784 kB' 'MemAvailable: 43833980 kB' 'Buffers: 2696 kB' 'Cached: 15990612 kB' 'SwapCached: 0 kB' 'Active: 12821784 kB' 'Inactive: 3662132 kB' 'Active(anon): 12255052 kB' 'Inactive(anon): 0 kB' 'Active(file): 566732 kB' 'Inactive(file): 3662132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493828 kB' 'Mapped: 215684 kB' 'Shmem: 11764444 kB' 'KReclaimable: 441768 kB' 'Slab: 843348 kB' 'SReclaimable: 441768 kB' 'SUnreclaim: 401580 kB' 'KernelStack: 12928 kB' 'PageTables: 9672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609872 kB' 'Committed_AS: 13379408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197340 kB' 'VmallocChunk: 0 kB' 'Percpu: 40320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2563676 kB' 'DirectMap2M: 24619008 kB' 'DirectMap1G: 41943040 kB' 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.747 06:31:14 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.747 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.747 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 
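The long run of '[[ <key> == HugePages_Rsvd ]] ... continue' records here is setup/common.sh walking the captured meminfo snapshot one 'key: value' pair at a time (IFS=': ', read -r var val _), skipping every key until the requested one matches and then echoing its value. A minimal stand-alone equivalent of that scan, reading /proc/meminfo directly (the function name is illustrative only, not the SPDK helper), might look like:

  # Print the value of one /proc/meminfo key, e.g. HugePages_Rsvd.
  get_meminfo_value() {
      local want=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$want" ]] || continue   # skip every other key
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1
  }
  get_meminfo_value HugePages_Rsvd    # prints 0 in this run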
06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.748 06:31:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.748 06:31:14 -- setup/common.sh@33 -- # echo 0 00:05:09.748 06:31:14 -- setup/common.sh@33 -- # return 0 00:05:09.748 06:31:14 -- setup/hugepages.sh@100 -- # resv=0 00:05:09.748 06:31:14 -- setup/hugepages.sh@102 
-- # echo nr_hugepages=1025 00:05:09.748 nr_hugepages=1025 00:05:09.748 06:31:14 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:09.748 resv_hugepages=0 00:05:09.748 06:31:14 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:09.748 surplus_hugepages=0 00:05:09.748 06:31:14 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:09.748 anon_hugepages=0 00:05:09.748 06:31:14 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:09.748 06:31:14 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:09.748 06:31:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:09.748 06:31:14 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:09.748 06:31:14 -- setup/common.sh@18 -- # local node= 00:05:09.748 06:31:14 -- setup/common.sh@19 -- # local var val 00:05:09.748 06:31:14 -- setup/common.sh@20 -- # local mem_f mem 00:05:09.748 06:31:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.748 06:31:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.748 06:31:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.748 06:31:14 -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.748 06:31:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.748 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.749 06:31:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39806984 kB' 'MemAvailable: 43833180 kB' 'Buffers: 2696 kB' 'Cached: 15990632 kB' 'SwapCached: 0 kB' 'Active: 12821200 kB' 'Inactive: 3662132 kB' 'Active(anon): 12254468 kB' 'Inactive(anon): 0 kB' 'Active(file): 566732 kB' 'Inactive(file): 3662132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493300 kB' 'Mapped: 215620 kB' 'Shmem: 11764464 kB' 'KReclaimable: 441768 kB' 'Slab: 843360 kB' 'SReclaimable: 441768 kB' 'SUnreclaim: 401592 kB' 'KernelStack: 13200 kB' 'PageTables: 9488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609872 kB' 'Committed_AS: 13378048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197180 kB' 'VmallocChunk: 0 kB' 'Percpu: 40320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2563676 kB' 'DirectMap2M: 24619008 kB' 'DirectMap1G: 41943040 kB' 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.749 06:31:14 -- setup/common.sh@31 
-- # read -r var val _ 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.749 06:31:14 -- 
setup/common.sh@32 -- # continue 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.749 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.749 06:31:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.750 06:31:14 -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # continue 
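For reference, the system-wide snapshot printed a few records back in this scan reports 'HugePages_Total: 1025', 'Hugepagesize: 2048 kB' and 'Hugetlb: 2099200 kB', which is self-consistent: 1025 pages x 2048 kB = 2,099,200 kB. The same identity can be spot-checked on any host where only the default 2048 kB page size is in use (hosts that also use other hugepage sizes report a larger Hugetlb than this product); the awk below assumes only the standard /proc/meminfo keys:

  # Compare HugePages_Total * Hugepagesize against the reported Hugetlb figure.
  awk '/^HugePages_Total:/ {n = $2}
       /^Hugepagesize:/    {sz = $2}
       /^Hugetlb:/         {tot = $2}
       END {printf "computed %d kB, reported %d kB\n", n * sz, tot}' /proc/meminfo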
00:05:09.750 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.750 06:31:14 -- setup/common.sh@33 -- # echo 1025 00:05:09.750 06:31:14 -- setup/common.sh@33 -- # return 0 00:05:09.750 06:31:14 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:09.750 06:31:14 -- setup/hugepages.sh@112 -- # get_nodes 00:05:09.750 06:31:14 -- setup/hugepages.sh@27 -- # local node 00:05:09.750 06:31:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:09.750 06:31:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:09.750 06:31:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:09.750 06:31:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:05:09.750 06:31:14 -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:09.750 06:31:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:09.750 06:31:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:09.750 06:31:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:09.750 06:31:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:09.750 06:31:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:09.750 06:31:14 -- setup/common.sh@18 -- # local node=0 00:05:09.750 06:31:14 -- setup/common.sh@19 -- # local var val 00:05:09.750 06:31:14 -- setup/common.sh@20 
-- # local mem_f mem 00:05:09.750 06:31:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.750 06:31:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:09.750 06:31:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:09.750 06:31:14 -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.750 06:31:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.750 06:31:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 17061664 kB' 'MemUsed: 15768220 kB' 'SwapCached: 0 kB' 'Active: 9862092 kB' 'Inactive: 3496876 kB' 'Active(anon): 9478720 kB' 'Inactive(anon): 0 kB' 'Active(file): 383372 kB' 'Inactive(file): 3496876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 13057972 kB' 'Mapped: 141792 kB' 'AnonPages: 304268 kB' 'Shmem: 9177724 kB' 'KernelStack: 6792 kB' 'PageTables: 5116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 290008 kB' 'Slab: 523512 kB' 'SReclaimable: 290008 kB' 'SUnreclaim: 233504 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.750 
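Two details of get_meminfo are visible just above: called without a node argument, its probe for /sys/devices/system/node/node/meminfo fails and it falls back to /proc/meminfo; called with node=0, it reads /sys/devices/system/node/node0/meminfo instead, whose lines carry a 'Node 0 ' prefix that the "${mem[@]#Node +([0-9]) }" expansion strips before the scan. A rough one-function sketch of that selection (not the SPDK helper itself; the sed pattern assumes GNU sed):

  # Print one meminfo key, system-wide or for a single NUMA node.
  node_meminfo() {
      local key=$1 node=${2-} file=/proc/meminfo
      [[ -n $node ]] && file=/sys/devices/system/node/node$node/meminfo
      sed -n "s/^\(Node [0-9]\+ \)\?$key:[[:space:]]*\([0-9]\+\).*/\2/p" "$file"
  }
  node_meminfo HugePages_Total      # whole system
  node_meminfo HugePages_Total 0    # NUMA node 0 only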
06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.750 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.750 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 
06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@33 -- # echo 0 00:05:09.751 06:31:14 -- setup/common.sh@33 -- # return 0 00:05:09.751 06:31:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:09.751 06:31:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:09.751 06:31:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:09.751 06:31:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:09.751 06:31:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:09.751 06:31:14 -- setup/common.sh@18 -- # local node=1 00:05:09.751 06:31:14 -- setup/common.sh@19 -- # local var val 00:05:09.751 06:31:14 -- setup/common.sh@20 -- # local mem_f mem 00:05:09.751 06:31:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.751 06:31:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:09.751 06:31:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:09.751 06:31:14 -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.751 06:31:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711852 kB' 'MemFree: 22746644 kB' 'MemUsed: 4965208 kB' 'SwapCached: 0 kB' 'Active: 2958124 kB' 'Inactive: 165256 kB' 'Active(anon): 2774764 kB' 'Inactive(anon): 0 kB' 'Active(file): 183360 kB' 'Inactive(file): 165256 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2935368 kB' 'Mapped: 73872 kB' 'AnonPages: 188008 kB' 'Shmem: 2586752 kB' 'KernelStack: 5944 kB' 'PageTables: 2980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 151760 kB' 'Slab: 319844 kB' 'SReclaimable: 151760 kB' 'SUnreclaim: 168084 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # continue 
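The two per-node snapshots above report 'HugePages_Total: 512' (all free) for node 0 and 'HugePages_Total: 513' for node 1, with zero surplus on both, which together account for the 1025 pages this odd_alloc run requested. The same per-node counters can be read straight from sysfs; the paths below are the standard per-node counters for the 2048 kB page size used here, and 1025 is simply this run's request:

  # Sum the per-node 2048 kB hugepage pools and compare with the request.
  want=1025 total=0
  for f in /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages; do
      read -r n < "$f"
      echo "$f: $n"
      (( total += n ))
  done
  if (( total == want )); then echo "OK: $total pages"; else echo "mismatch: $total != $want"; fi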
00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.751 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.751 06:31:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # [[ FilePages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.752 06:31:14 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # continue 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.752 06:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.752 06:31:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.752 06:31:14 -- setup/common.sh@33 -- # echo 0 00:05:09.752 06:31:14 -- setup/common.sh@33 -- # return 0 00:05:09.752 06:31:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:09.752 06:31:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:09.752 06:31:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:09.752 06:31:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:09.752 06:31:14 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:05:09.752 node0=512 expecting 513 00:05:09.752 06:31:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:09.752 06:31:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:09.752 06:31:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:09.752 06:31:14 -- 
setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:05:09.752 node1=513 expecting 512 00:05:09.752 06:31:14 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:05:09.752 00:05:09.752 real 0m1.382s 00:05:09.752 user 0m0.578s 00:05:09.752 sys 0m0.765s 00:05:09.752 06:31:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:09.752 06:31:14 -- common/autotest_common.sh@10 -- # set +x 00:05:09.752 ************************************ 00:05:09.752 END TEST odd_alloc 00:05:09.752 ************************************ 00:05:09.752 06:31:14 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:09.752 06:31:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:09.752 06:31:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.752 06:31:14 -- common/autotest_common.sh@10 -- # set +x 00:05:10.010 ************************************ 00:05:10.010 START TEST custom_alloc 00:05:10.010 ************************************ 00:05:10.010 06:31:14 -- common/autotest_common.sh@1111 -- # custom_alloc 00:05:10.010 06:31:14 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:10.010 06:31:14 -- setup/hugepages.sh@169 -- # local node 00:05:10.010 06:31:14 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:10.010 06:31:14 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:10.010 06:31:14 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:10.010 06:31:14 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:10.010 06:31:14 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:10.010 06:31:14 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:10.010 06:31:14 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:10.010 06:31:14 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:10.010 06:31:14 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:10.010 06:31:14 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:10.010 06:31:14 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:10.010 06:31:14 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:10.010 06:31:14 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:10.010 06:31:14 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:10.010 06:31:14 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:10.010 06:31:14 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:10.010 06:31:14 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:10.010 06:31:14 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:10.010 06:31:14 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:10.010 06:31:14 -- setup/hugepages.sh@83 -- # : 256 00:05:10.010 06:31:14 -- setup/hugepages.sh@84 -- # : 1 00:05:10.010 06:31:14 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:10.010 06:31:14 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:10.010 06:31:14 -- setup/hugepages.sh@83 -- # : 0 00:05:10.010 06:31:14 -- setup/hugepages.sh@84 -- # : 0 00:05:10.010 06:31:14 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:10.010 06:31:14 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:10.010 06:31:14 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:05:10.010 06:31:14 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:05:10.010 06:31:14 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:10.010 06:31:14 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:10.010 06:31:14 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:10.010 06:31:14 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:10.010 06:31:14 
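The two get_test_nr_hugepages calls above turn a requested amount of hugepage memory into a page count against the default 2048 kB page size: 1048576 / 2048 = 512 pages (kept as nodes_hp[0]) and 2097152 / 2048 = 1024 pages (nodes_hp[1] in the records that follow), i.e. the arguments are apparently kB figures for 1 GiB and 2 GiB. In shell terms, using only the values from this run:

  # The conversion visible above; 2048 kB is the Hugepagesize reported earlier in the log.
  hugepagesize_kb=2048
  for size_kb in 1048576 2097152; do
      echo "$size_kb kB -> $(( size_kb / hugepagesize_kb )) hugepages"
  done
  # 1048576 kB -> 512 hugepages    (nodes_hp[0], 1 GiB)
  # 2097152 kB -> 1024 hugepages   (nodes_hp[1], 2 GiB)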
-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:10.010 06:31:14 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:10.010 06:31:14 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:10.010 06:31:14 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:10.010 06:31:14 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:10.010 06:31:14 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:10.010 06:31:14 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:10.010 06:31:14 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:10.010 06:31:14 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:10.010 06:31:14 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:10.010 06:31:14 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:10.011 06:31:14 -- setup/hugepages.sh@78 -- # return 0 00:05:10.011 06:31:14 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:05:10.011 06:31:14 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:10.011 06:31:14 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:10.011 06:31:14 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:10.011 06:31:14 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:10.011 06:31:14 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:10.011 06:31:14 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:10.011 06:31:14 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:10.011 06:31:14 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:10.011 06:31:14 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:10.011 06:31:14 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:10.011 06:31:14 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:10.011 06:31:14 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:10.011 06:31:14 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:10.011 06:31:14 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:10.011 06:31:14 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:05:10.011 06:31:14 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:10.011 06:31:14 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:10.011 06:31:14 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:10.011 06:31:14 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:05:10.011 06:31:14 -- setup/hugepages.sh@78 -- # return 0 00:05:10.011 06:31:14 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:05:10.011 06:31:14 -- setup/hugepages.sh@187 -- # setup output 00:05:10.011 06:31:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:10.011 06:31:14 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:10.943 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:10.943 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:10.943 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:10.943 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:11.201 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:11.201 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:11.201 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:11.201 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:11.201 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:11.201 0000:80:04.7 (8086 0e27): Already using the vfio-pci 
driver 00:05:11.201 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:11.201 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:11.201 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:11.201 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:11.201 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:11.201 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:11.201 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:11.201 06:31:15 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:05:11.201 06:31:15 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:11.201 06:31:15 -- setup/hugepages.sh@89 -- # local node 00:05:11.201 06:31:15 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:11.201 06:31:15 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:11.201 06:31:15 -- setup/hugepages.sh@92 -- # local surp 00:05:11.201 06:31:15 -- setup/hugepages.sh@93 -- # local resv 00:05:11.201 06:31:15 -- setup/hugepages.sh@94 -- # local anon 00:05:11.201 06:31:15 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:11.201 06:31:15 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:11.201 06:31:15 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:11.201 06:31:15 -- setup/common.sh@18 -- # local node= 00:05:11.201 06:31:15 -- setup/common.sh@19 -- # local var val 00:05:11.201 06:31:15 -- setup/common.sh@20 -- # local mem_f mem 00:05:11.201 06:31:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.201 06:31:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.201 06:31:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.201 06:31:15 -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.201 06:31:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.201 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.201 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 38747224 kB' 'MemAvailable: 42773420 kB' 'Buffers: 2696 kB' 'Cached: 15990696 kB' 'SwapCached: 0 kB' 'Active: 12820652 kB' 'Inactive: 3662132 kB' 'Active(anon): 12253920 kB' 'Inactive(anon): 0 kB' 'Active(file): 566732 kB' 'Inactive(file): 3662132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492696 kB' 'Mapped: 215708 kB' 'Shmem: 11764528 kB' 'KReclaimable: 441768 kB' 'Slab: 843364 kB' 'SReclaimable: 441768 kB' 'SUnreclaim: 401596 kB' 'KernelStack: 12880 kB' 'PageTables: 8648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086608 kB' 'Committed_AS: 13377220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197148 kB' 'VmallocChunk: 0 kB' 'Percpu: 40320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2563676 kB' 'DirectMap2M: 24619008 kB' 'DirectMap1G: 41943040 kB' 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 
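Having built the per-node plan, the harness exports HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' and re-runs scripts/setup.sh (the 'Already using the vfio-pci driver' lines are that script's device-status output), after which the snapshot above reports 'HugePages_Total: 1536', i.e. the 512 + 1024 split. Outside the harness the same reservation could presumably be requested directly; the invocation below assumes the per-node HUGENODE syntax accepted by the setup.sh in this checkout and must run as root:

  # From the SPDK checkout used in this run, as root:
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' ./scripts/setup.sh
  grep -E 'HugePages_Total|Hugetlb' /proc/meminfo   # expect 1536 pages / 3145728 kB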
-- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 
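(Aside, not part of the log: the earlier portion of this trace, setup/hugepages.sh@181-187, assembles the HUGENODE string from nodes_hp[0]=512 and nodes_hp[1]=1024 before invoking setup.sh. A minimal sketch of that composition, assuming the same per-node request and using my own variable names for the join, could look like this.)

```bash
#!/usr/bin/env bash
# Sketch only -- not the test's own code. Recreates the HUGENODE string the
# trace shows at setup/hugepages.sh@187, assuming 512 pages on node 0 and
# 1024 pages on node 1.
nodes_hp=([0]=512 [1]=1024)
parts=()
total=0
for node in "${!nodes_hp[@]}"; do
    parts+=("nodes_hp[$node]=${nodes_hp[$node]}")
    (( total += nodes_hp[node] ))
done
hugenode=$(IFS=,; printf '%s' "${parts[*]}")
echo "HUGENODE=$hugenode"            # nodes_hp[0]=512,nodes_hp[1]=1024
echo "expected nr_hugepages=$total"  # 1536
```

(The 1536 pages expected by verify_nr_hugepages below is exactly this sum of the two per-node requests.)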
00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 
06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.202 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.202 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.203 06:31:15 -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.203 06:31:15 -- setup/common.sh@33 -- # echo 0 00:05:11.203 06:31:15 -- setup/common.sh@33 -- # return 0 00:05:11.203 06:31:15 -- setup/hugepages.sh@97 -- # anon=0 00:05:11.203 06:31:15 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:11.203 06:31:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:11.203 06:31:15 -- setup/common.sh@18 -- # local node= 00:05:11.203 06:31:15 -- setup/common.sh@19 -- # local var val 00:05:11.203 06:31:15 -- setup/common.sh@20 -- # local mem_f mem 00:05:11.203 06:31:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.203 06:31:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.203 06:31:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.203 06:31:15 -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.203 06:31:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.203 06:31:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 38754956 kB' 'MemAvailable: 42781152 kB' 'Buffers: 2696 kB' 'Cached: 15990700 kB' 'SwapCached: 0 kB' 'Active: 12820268 kB' 'Inactive: 3662132 kB' 'Active(anon): 12253536 kB' 'Inactive(anon): 0 kB' 'Active(file): 566732 kB' 'Inactive(file): 3662132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492316 kB' 'Mapped: 215652 kB' 'Shmem: 11764532 kB' 'KReclaimable: 441768 kB' 'Slab: 843420 kB' 'SReclaimable: 441768 kB' 'SUnreclaim: 401652 kB' 'KernelStack: 12864 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086608 kB' 'Committed_AS: 13377232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197132 kB' 'VmallocChunk: 0 kB' 'Percpu: 40320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2563676 kB' 'DirectMap2M: 24619008 kB' 'DirectMap1G: 41943040 kB' 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 
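(Aside, not part of the log: the get_meminfo lookups traced here snapshot /proc/meminfo — or the node-local copy with its "Node N " prefix stripped — and walk it field by field with IFS=': ' until the requested key matches, returning 0 otherwise. A simpler stand-in under the same idea, where the function body and its handling of a missing key are my simplification rather than setup/common.sh itself, might be:)

```bash
#!/usr/bin/env bash
# Minimal stand-in for the get_meminfo lookups in this trace; a sketch,
# not the script's actual implementation.
get_meminfo() {
    local key=$1 node=${2:-}
    local src=/proc/meminfo
    # With a node argument, read the node-local copy instead; those lines
    # carry a leading "Node N " prefix that the real helper strips.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        src=/sys/devices/system/node/node$node/meminfo
    sed 's/^Node [0-9]* //' "$src" |
        awk -v k="$key" -F': *' '$1 == k { print $2 + 0; exit }'
}

get_meminfo HugePages_Total      # e.g. 1536 after this setup run
get_meminfo HugePages_Total 0    # per-node count, e.g. 512 on node 0
```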
00:05:11.203 06:31:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.203 06:31:15 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:11.203 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.203 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.203 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.204 06:31:15 -- setup/common.sh@33 -- # echo 0 00:05:11.204 06:31:15 -- setup/common.sh@33 -- # return 0 00:05:11.204 06:31:15 -- setup/hugepages.sh@99 -- # surp=0 00:05:11.204 06:31:15 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:11.204 06:31:15 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:11.204 06:31:15 -- setup/common.sh@18 -- # local node= 00:05:11.204 06:31:15 -- setup/common.sh@19 -- # local var val 00:05:11.204 06:31:15 -- setup/common.sh@20 -- # local mem_f mem 00:05:11.204 06:31:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.204 
06:31:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.204 06:31:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.204 06:31:15 -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.204 06:31:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.204 06:31:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 38755248 kB' 'MemAvailable: 42781444 kB' 'Buffers: 2696 kB' 'Cached: 15990712 kB' 'SwapCached: 0 kB' 'Active: 12820056 kB' 'Inactive: 3662132 kB' 'Active(anon): 12253324 kB' 'Inactive(anon): 0 kB' 'Active(file): 566732 kB' 'Inactive(file): 3662132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492064 kB' 'Mapped: 215652 kB' 'Shmem: 11764544 kB' 'KReclaimable: 441768 kB' 'Slab: 843400 kB' 'SReclaimable: 441768 kB' 'SUnreclaim: 401632 kB' 'KernelStack: 12848 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086608 kB' 'Committed_AS: 13377248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197148 kB' 'VmallocChunk: 0 kB' 'Percpu: 40320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2563676 kB' 'DirectMap2M: 24619008 kB' 'DirectMap1G: 41943040 kB' 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.204 06:31:15 -- 
setup/common.sh@32 -- # continue 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.204 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.204 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 
00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.205 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.205 06:31:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.205 06:31:15 -- setup/common.sh@33 -- # echo 0 00:05:11.205 06:31:15 -- setup/common.sh@33 -- # return 0 00:05:11.205 06:31:15 -- setup/hugepages.sh@100 -- # resv=0 00:05:11.205 06:31:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:05:11.205 nr_hugepages=1536 00:05:11.205 06:31:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:11.205 resv_hugepages=0 00:05:11.205 06:31:15 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:11.205 surplus_hugepages=0 00:05:11.206 06:31:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:11.206 anon_hugepages=0 00:05:11.206 06:31:15 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:11.206 06:31:15 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:05:11.206 06:31:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:11.206 06:31:15 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:11.206 06:31:15 -- setup/common.sh@18 -- # local node= 00:05:11.206 06:31:15 -- setup/common.sh@19 -- # local var val 00:05:11.206 06:31:15 -- setup/common.sh@20 -- # local mem_f mem 00:05:11.206 06:31:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.206 06:31:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.206 06:31:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.206 06:31:15 -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.206 06:31:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.206 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.206 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.206 06:31:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 38755312 kB' 'MemAvailable: 42781508 kB' 'Buffers: 2696 kB' 'Cached: 15990724 kB' 'SwapCached: 0 kB' 'Active: 
12820264 kB' 'Inactive: 3662132 kB' 'Active(anon): 12253532 kB' 'Inactive(anon): 0 kB' 'Active(file): 566732 kB' 'Inactive(file): 3662132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492280 kB' 'Mapped: 215652 kB' 'Shmem: 11764556 kB' 'KReclaimable: 441768 kB' 'Slab: 843400 kB' 'SReclaimable: 441768 kB' 'SUnreclaim: 401632 kB' 'KernelStack: 12848 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086608 kB' 'Committed_AS: 13377264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197148 kB' 'VmallocChunk: 0 kB' 'Percpu: 40320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2563676 kB' 'DirectMap2M: 24619008 kB' 'DirectMap1G: 41943040 kB' 00:05:11.206 06:31:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.206 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.206 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.206 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.206 06:31:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.206 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.206 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.206 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.206 06:31:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.206 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.206 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.206 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.206 06:31:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.206 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.206 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.206 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.206 06:31:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.206 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.206 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.206 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.206 06:31:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.206 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.206 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.206 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.206 06:31:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.206 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.206 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.206 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.206 06:31:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.206 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.206 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.206 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.206 06:31:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.206 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.206 
06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.206 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.206 06:31:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.206 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.206 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.206 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.206 06:31:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.206 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.206 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.206 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.206 06:31:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.206 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.206 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.206 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.206 06:31:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.206 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.206 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.206 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.206 06:31:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.206 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.206 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.206 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.465 06:31:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.465 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.465 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.465 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.465 06:31:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.465 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.465 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.465 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.465 06:31:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.465 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.465 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.465 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 
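(Aside, not part of the log: once the global totals check out — HugePages_Total of 1536 with zero surplus and reserved pages — the trace that follows switches to per-node verification via get_nodes and /sys/devices/system/node/node*/meminfo. A hedged sketch of that per-node check, where the "expected" array simply mirrors the HUGENODE request and is my own addition, might read:)

```bash
#!/usr/bin/env bash
# Sketch of the per-node verification performed in the trace below; the
# expected counts (512 on node 0, 1024 on node 1) come from HUGENODE.
expected=([0]=512 [1]=1024)
total=0
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    got=$(sed 's/^Node [0-9]* //' "$node_dir/meminfo" |
          awk -F': *' '$1 == "HugePages_Total" { print $2 + 0 }')
    printf 'node%s: HugePages_Total=%s (expected %s)\n' \
           "$node" "$got" "${expected[node]:-0}"
    (( total += got ))
done
echo "total=$total"   # should match nr_hugepages=1536
```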
00:05:11.466 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.466 06:31:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.466 06:31:15 -- setup/common.sh@33 -- # echo 1536 00:05:11.466 06:31:15 -- setup/common.sh@33 -- # return 0 00:05:11.466 06:31:15 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:11.466 06:31:15 -- setup/hugepages.sh@112 -- # get_nodes 00:05:11.466 06:31:15 -- setup/hugepages.sh@27 -- # local node 00:05:11.466 06:31:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:11.466 06:31:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:11.466 06:31:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:11.466 06:31:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:11.466 06:31:15 -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:11.466 06:31:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:11.466 06:31:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:11.466 06:31:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:11.466 06:31:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:11.466 06:31:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:11.466 06:31:15 -- setup/common.sh@18 -- # local node=0 00:05:11.466 06:31:15 -- setup/common.sh@19 -- # local var val 00:05:11.466 06:31:15 -- setup/common.sh@20 -- # local mem_f mem 00:05:11.466 06:31:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.466 06:31:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:11.466 06:31:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:11.466 06:31:15 -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.466 06:31:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.466 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.466 06:31:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 17057972 kB' 'MemUsed: 15771912 kB' 'SwapCached: 0 kB' 'Active: 9862732 kB' 'Inactive: 3496876 kB' 'Active(anon): 9479360 kB' 'Inactive(anon): 0 kB' 'Active(file): 383372 kB' 'Inactive(file): 3496876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 13058056 kB' 'Mapped: 141760 kB' 'AnonPages: 304728 kB' 'Shmem: 9177808 kB' 'KernelStack: 6840 kB' 'PageTables: 5256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 290008 kB' 'Slab: 523444 kB' 'SReclaimable: 290008 kB' 'SUnreclaim: 233436 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 
-- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.467 06:31:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.467 06:31:15 -- setup/common.sh@33 -- # echo 0 00:05:11.467 06:31:15 -- setup/common.sh@33 -- # return 0 00:05:11.467 06:31:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:11.467 06:31:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:11.467 06:31:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:11.467 06:31:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:11.467 06:31:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:11.467 06:31:15 -- setup/common.sh@18 -- # local node=1 
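(Editor's note: the span above finishes the node-0 lookup — mem_f switches to /sys/devices/system/node/node0/meminfo, the "Node 0 " prefix is stripped, and HugePages_Surp comes back as 0 — and the same lookup then starts for node=1. A minimal sketch of that per-node variant, under the same hedging as before; the name and the regex-based prefix strip are my own illustration, not the script's exact code:

    get_node_meminfo_sketch() {
        # read one field from a single NUMA node, falling back to /proc/meminfo;
        # per-node files prefix every line with "Node <id> ", so strip that first
        local get=$1 node=$2 mem_f=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }
    # get_node_meminfo_sketch HugePages_Surp 0   -> 0 here; the per-node totals
    # read back as 512 (node0) and 1024 (node1), matching "node0=512 expecting 512"
    # and "node1=1024 expecting 1024" later in this test.

End of editor's note; the log continues below.)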
00:05:11.467 06:31:15 -- setup/common.sh@19 -- # local var val 00:05:11.467 06:31:15 -- setup/common.sh@20 -- # local mem_f mem 00:05:11.467 06:31:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.467 06:31:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:11.467 06:31:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:11.467 06:31:15 -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.467 06:31:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.467 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711852 kB' 'MemFree: 21698180 kB' 'MemUsed: 6013672 kB' 'SwapCached: 0 kB' 'Active: 2957552 kB' 'Inactive: 165256 kB' 'Active(anon): 2774192 kB' 'Inactive(anon): 0 kB' 'Active(file): 183360 kB' 'Inactive(file): 165256 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2935380 kB' 'Mapped: 73892 kB' 'AnonPages: 187544 kB' 'Shmem: 2586764 kB' 'KernelStack: 6008 kB' 'PageTables: 3264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 151760 kB' 'Slab: 319956 kB' 'SReclaimable: 151760 kB' 'SUnreclaim: 168196 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- 
# continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # continue 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.468 06:31:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.468 06:31:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.468 06:31:15 -- setup/common.sh@33 -- # echo 0 00:05:11.468 06:31:15 -- setup/common.sh@33 -- # return 0 00:05:11.468 06:31:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:11.468 06:31:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:11.468 06:31:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:11.468 06:31:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:11.468 06:31:15 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:11.468 node0=512 expecting 512 00:05:11.468 06:31:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:11.469 06:31:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:11.469 06:31:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:11.469 06:31:15 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:05:11.469 node1=1024 expecting 1024 00:05:11.469 06:31:15 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:05:11.469 00:05:11.469 real 0m1.434s 00:05:11.469 user 0m0.646s 00:05:11.469 sys 0m0.750s 00:05:11.469 06:31:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:11.469 06:31:15 -- common/autotest_common.sh@10 -- # set +x 00:05:11.469 ************************************ 00:05:11.469 END TEST custom_alloc 00:05:11.469 ************************************ 00:05:11.469 06:31:15 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:11.469 06:31:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.469 06:31:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.469 06:31:15 -- common/autotest_common.sh@10 -- # set +x 00:05:11.469 ************************************ 00:05:11.469 START TEST no_shrink_alloc 00:05:11.469 ************************************ 00:05:11.469 06:31:15 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:05:11.469 06:31:15 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:11.469 06:31:15 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:11.469 06:31:15 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:11.469 06:31:15 -- setup/hugepages.sh@51 -- # shift 00:05:11.469 06:31:15 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:11.469 06:31:15 -- setup/hugepages.sh@52 -- # local node_ids 00:05:11.469 06:31:15 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:11.469 06:31:15 -- 
setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:11.469 06:31:15 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:11.469 06:31:15 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:11.469 06:31:15 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:11.469 06:31:15 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:11.469 06:31:15 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:11.469 06:31:15 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:11.469 06:31:15 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:11.469 06:31:15 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:11.469 06:31:15 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:11.469 06:31:15 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:11.469 06:31:15 -- setup/hugepages.sh@73 -- # return 0 00:05:11.469 06:31:15 -- setup/hugepages.sh@198 -- # setup output 00:05:11.469 06:31:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.469 06:31:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:12.844 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:12.844 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:12.844 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:12.844 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:12.844 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:12.844 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:12.844 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:12.844 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:12.844 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:12.844 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:12.844 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:12.844 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:12.844 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:12.844 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:12.844 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:12.844 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:12.844 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:12.844 06:31:17 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:12.844 06:31:17 -- setup/hugepages.sh@89 -- # local node 00:05:12.844 06:31:17 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:12.844 06:31:17 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:12.844 06:31:17 -- setup/hugepages.sh@92 -- # local surp 00:05:12.844 06:31:17 -- setup/hugepages.sh@93 -- # local resv 00:05:12.844 06:31:17 -- setup/hugepages.sh@94 -- # local anon 00:05:12.844 06:31:17 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:12.844 06:31:17 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:12.844 06:31:17 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:12.844 06:31:17 -- setup/common.sh@18 -- # local node= 00:05:12.844 06:31:17 -- setup/common.sh@19 -- # local var val 00:05:12.844 06:31:17 -- setup/common.sh@20 -- # local mem_f mem 00:05:12.844 06:31:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.844 06:31:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.844 06:31:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.844 06:31:17 -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.844 06:31:17 -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.844 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.844 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39792064 kB' 'MemAvailable: 43818220 kB' 'Buffers: 2696 kB' 'Cached: 15990792 kB' 'SwapCached: 0 kB' 'Active: 12820356 kB' 'Inactive: 3662132 kB' 'Active(anon): 12253624 kB' 'Inactive(anon): 0 kB' 'Active(file): 566732 kB' 'Inactive(file): 3662132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492148 kB' 'Mapped: 215728 kB' 'Shmem: 11764624 kB' 'KReclaimable: 441728 kB' 'Slab: 843428 kB' 'SReclaimable: 441728 kB' 'SUnreclaim: 401700 kB' 'KernelStack: 12768 kB' 'PageTables: 8248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13377080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197180 kB' 'VmallocChunk: 0 kB' 'Percpu: 40320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2563676 kB' 'DirectMap2M: 24619008 kB' 'DirectMap1G: 41943040 kB' 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
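(Editor's note: at this point no_shrink_alloc has asked get_test_nr_hugepages for 2097152 kB on node 0, which works out to 1024 pages at the default 2048 kB hugepage size, and verify_nr_hugepages has begun — first checking that transparent hugepages are not set to [never], then reading AnonHugePages and the HugePages counters from the snapshot printed above. A small sketch of that verification identity (Total == expected + Surp + Rsvd, the same comparison the earlier "(( 1536 == nr_hugepages + surp + resv ))" check used); the helper name and awk extraction are illustrative, not the autotest's own code:

    verify_hugepages_sketch() {
        # compare global HugePages accounting against the expected allocation
        local expected=$1 total surp resv thp
        thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
        [[ $thp == *'[never]'* ]] && echo "note: THP is set to never" >&2
        total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
        surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
        resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
        (( total == expected + surp + resv ))
    }
    # verify_hugepages_sketch 1024   # no_shrink_alloc requests 1024 pages on node 0

End of editor's note; the log continues below.)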
00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- 
setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.845 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.845 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.846 06:31:17 -- setup/common.sh@33 -- # echo 0 00:05:12.846 06:31:17 -- setup/common.sh@33 -- # return 0 00:05:12.846 06:31:17 -- setup/hugepages.sh@97 -- # anon=0 00:05:12.846 06:31:17 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:12.846 06:31:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:12.846 06:31:17 -- setup/common.sh@18 -- # local node= 00:05:12.846 06:31:17 -- setup/common.sh@19 -- # local var val 00:05:12.846 06:31:17 -- setup/common.sh@20 -- # local mem_f mem 00:05:12.846 06:31:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.846 06:31:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.846 06:31:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.846 06:31:17 -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.846 06:31:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39794244 kB' 'MemAvailable: 43820400 kB' 'Buffers: 2696 kB' 'Cached: 15990800 kB' 'SwapCached: 0 kB' 'Active: 12821304 kB' 'Inactive: 3662132 kB' 'Active(anon): 12254572 kB' 'Inactive(anon): 0 kB' 'Active(file): 566732 kB' 'Inactive(file): 3662132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493160 kB' 'Mapped: 215728 kB' 'Shmem: 11764632 kB' 'KReclaimable: 441728 kB' 'Slab: 843368 kB' 'SReclaimable: 441728 kB' 'SUnreclaim: 401640 kB' 'KernelStack: 12816 kB' 'PageTables: 8416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13377460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197180 kB' 'VmallocChunk: 0 kB' 'Percpu: 40320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2563676 kB' 'DirectMap2M: 24619008 kB' 'DirectMap1G: 41943040 kB' 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.846 06:31:17 -- 
setup/common.sh@32 -- # continue 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.846 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.846 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.847 06:31:17 -- setup/common.sh@33 -- # echo 0 00:05:12.847 06:31:17 -- setup/common.sh@33 -- # return 0 00:05:12.847 06:31:17 -- setup/hugepages.sh@99 -- # surp=0 00:05:12.847 06:31:17 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:12.847 06:31:17 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:12.847 06:31:17 -- setup/common.sh@18 -- # local node= 00:05:12.847 06:31:17 -- setup/common.sh@19 -- # local var val 00:05:12.847 06:31:17 -- setup/common.sh@20 -- # local mem_f mem 00:05:12.847 06:31:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.847 06:31:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.847 06:31:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.847 06:31:17 -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.847 06:31:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.847 06:31:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39795288 kB' 'MemAvailable: 43821444 kB' 'Buffers: 2696 kB' 'Cached: 15990808 kB' 'SwapCached: 0 kB' 'Active: 12820136 kB' 'Inactive: 3662132 kB' 'Active(anon): 12253404 kB' 'Inactive(anon): 0 kB' 'Active(file): 566732 kB' 'Inactive(file): 3662132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491984 kB' 'Mapped: 215688 kB' 'Shmem: 11764640 kB' 'KReclaimable: 441728 kB' 'Slab: 843444 kB' 'SReclaimable: 441728 kB' 'SUnreclaim: 401716 kB' 'KernelStack: 12816 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13377472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197180 kB' 'VmallocChunk: 0 kB' 'Percpu: 40320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2563676 kB' 'DirectMap2M: 24619008 kB' 'DirectMap1G: 41943040 kB' 00:05:12.847 06:31:17 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.847 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.847 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- 
setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 
06:31:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.848 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.848 06:31:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.848 06:31:17 -- setup/common.sh@33 -- # echo 0 00:05:12.848 
06:31:17 -- setup/common.sh@33 -- # return 0 00:05:12.848 06:31:17 -- setup/hugepages.sh@100 -- # resv=0 00:05:12.848 06:31:17 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:12.848 nr_hugepages=1024 00:05:12.848 06:31:17 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:12.848 resv_hugepages=0 00:05:12.848 06:31:17 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:12.848 surplus_hugepages=0 00:05:12.848 06:31:17 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:12.848 anon_hugepages=0 00:05:12.848 06:31:17 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:12.849 06:31:17 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:12.849 06:31:17 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:12.849 06:31:17 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:12.849 06:31:17 -- setup/common.sh@18 -- # local node= 00:05:12.849 06:31:17 -- setup/common.sh@19 -- # local var val 00:05:12.849 06:31:17 -- setup/common.sh@20 -- # local mem_f mem 00:05:12.849 06:31:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.849 06:31:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.849 06:31:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.849 06:31:17 -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.849 06:31:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.849 06:31:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39795896 kB' 'MemAvailable: 43822052 kB' 'Buffers: 2696 kB' 'Cached: 15990828 kB' 'SwapCached: 0 kB' 'Active: 12820320 kB' 'Inactive: 3662132 kB' 'Active(anon): 12253588 kB' 'Inactive(anon): 0 kB' 'Active(file): 566732 kB' 'Inactive(file): 3662132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492188 kB' 'Mapped: 215688 kB' 'Shmem: 11764660 kB' 'KReclaimable: 441728 kB' 'Slab: 843444 kB' 'SReclaimable: 441728 kB' 'SUnreclaim: 401716 kB' 'KernelStack: 12864 kB' 'PageTables: 8460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13377488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197180 kB' 'VmallocChunk: 0 kB' 'Percpu: 40320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2563676 kB' 'DirectMap2M: 24619008 kB' 'DirectMap1G: 41943040 kB' 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
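The entries above and below are single get_meminfo lookups stepping through /proc/meminfo key by key: every field that is not the one being asked for (HugePages_Rsvd just above, HugePages_Total just below) is answered with "continue", the matching field ends in "echo <value>" / "return 0", and hugepages.sh records the results as resv=0 and surp=0 before checking that 1024 == nr_hugepages + surp + resv. A condensed sketch of that lookup, assuming extglob is enabled as it is in the traced run; the shape follows the trace, but it is illustrative rather than the verbatim setup/common.sh:

    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo line var val _
        local -a mem
        # a node-scoped query reads that node's own meminfo file when it exists
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")     # per-node files prefix each line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue # non-matching keys produce the "continue" entries seen here
            echo "${val:-0}"
            return 0
        done
        echo 0                               # key absent: fall back to 0, as the trace does
    }

Called the way the trace shows, resv=$(get_meminfo HugePages_Rsvd) and surp=$(get_meminfo HugePages_Surp) both come back 0 on this host, while get_meminfo HugePages_Total returns 1024.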
00:05:12.849 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.849 06:31:17 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.849 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.849 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # continue 
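The backslash-littered right-hand sides that fill these entries (\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and the like) are just how bash xtrace prints a quoted pattern inside [[ ]]: the key being searched for comes from a quoted variable, so the trace escapes every character to show it is matched literally rather than as a glob. A short demo of the same effect, illustrative only:

    set -x
    get=HugePages_Total
    [[ MemTotal == "$get" ]]   # traces as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
    [[ MemTotal == $get ]]     # unquoted: traces without the escapes and is matched as a glob pattern
    set +x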
00:05:12.850 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.850 
06:31:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.850 06:31:17 -- setup/common.sh@33 -- # echo 1024 00:05:12.850 06:31:17 -- setup/common.sh@33 -- # return 0 00:05:12.850 06:31:17 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:12.850 06:31:17 -- setup/hugepages.sh@112 -- # get_nodes 00:05:12.850 06:31:17 -- setup/hugepages.sh@27 -- # local node 00:05:12.850 06:31:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:12.850 06:31:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:12.850 06:31:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:12.850 06:31:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:12.850 06:31:17 -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:12.850 06:31:17 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:12.850 06:31:17 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:12.850 06:31:17 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:12.850 06:31:17 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:12.850 06:31:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:12.850 06:31:17 
-- setup/common.sh@18 -- # local node=0 00:05:12.850 06:31:17 -- setup/common.sh@19 -- # local var val 00:05:12.850 06:31:17 -- setup/common.sh@20 -- # local mem_f mem 00:05:12.850 06:31:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.850 06:31:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:12.850 06:31:17 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:12.850 06:31:17 -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.850 06:31:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.850 06:31:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 16009224 kB' 'MemUsed: 16820660 kB' 'SwapCached: 0 kB' 'Active: 9862728 kB' 'Inactive: 3496876 kB' 'Active(anon): 9479356 kB' 'Inactive(anon): 0 kB' 'Active(file): 383372 kB' 'Inactive(file): 3496876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 13058100 kB' 'Mapped: 141760 kB' 'AnonPages: 304696 kB' 'Shmem: 9177852 kB' 'KernelStack: 6872 kB' 'PageTables: 5244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 290008 kB' 'Slab: 523568 kB' 'SReclaimable: 290008 kB' 'SUnreclaim: 233560 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.850 06:31:17 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.850 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.850 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 
00:05:12.851 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
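This second pass is the same lookup scoped to NUMA node 0: mem_f has switched to /sys/devices/system/node/node0/meminfo, and once the remaining keys below are skipped the call returns 0 surplus pages, letting hugepages.sh print node0=1024 expecting 1024. A rough sketch of that per-node tally; the array names follow the trace (nodes_sys, nodes_test), but the expected values and the awk read are illustrative assumptions, not the script's own wiring:

    declare -A nodes_sys                      # hugepages actually present per node
    declare -A nodes_test=( [0]=1024 [1]=0 )  # what this run expects (values taken from the trace)

    for node_dir in /sys/devices/system/node/node[0-9]*; do
        n=${node_dir##*node}
        # each node's meminfo carries its own HugePages_Total line
        nodes_sys[$n]=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
    done

    for n in "${!nodes_test[@]}"; do
        echo "node$n=${nodes_sys[$n]} expecting ${nodes_test[$n]}"
    done

On this host that reproduces the node0=1024 expecting 1024 line the script itself prints just below.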
00:05:12.851 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # continue 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.851 06:31:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.851 06:31:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.851 06:31:17 -- setup/common.sh@33 -- # echo 0 00:05:12.851 06:31:17 -- setup/common.sh@33 -- # return 0 00:05:12.851 06:31:17 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:12.851 06:31:17 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:12.851 06:31:17 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:12.851 06:31:17 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:12.851 06:31:17 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:12.851 node0=1024 expecting 1024 00:05:12.851 06:31:17 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:12.851 06:31:17 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:12.851 06:31:17 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:12.851 06:31:17 -- setup/hugepages.sh@202 -- # setup output 00:05:12.851 06:31:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.851 06:31:17 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:14.228 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:14.228 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:14.228 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:14.228 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:14.228 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:14.228 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:14.228 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:14.228 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:14.228 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:14.228 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:14.228 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:14.228 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:14.228 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:14.228 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:14.228 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:14.228 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:14.228 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:14.228 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:14.228 06:31:18 -- setup/hugepages.sh@204 -- # 
verify_nr_hugepages 00:05:14.228 06:31:18 -- setup/hugepages.sh@89 -- # local node 00:05:14.228 06:31:18 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:14.228 06:31:18 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:14.228 06:31:18 -- setup/hugepages.sh@92 -- # local surp 00:05:14.228 06:31:18 -- setup/hugepages.sh@93 -- # local resv 00:05:14.228 06:31:18 -- setup/hugepages.sh@94 -- # local anon 00:05:14.228 06:31:18 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:14.228 06:31:18 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:14.228 06:31:18 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:14.228 06:31:18 -- setup/common.sh@18 -- # local node= 00:05:14.228 06:31:18 -- setup/common.sh@19 -- # local var val 00:05:14.228 06:31:18 -- setup/common.sh@20 -- # local mem_f mem 00:05:14.228 06:31:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.228 06:31:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.228 06:31:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.228 06:31:18 -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.228 06:31:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.228 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.228 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.228 06:31:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39781604 kB' 'MemAvailable: 43807760 kB' 'Buffers: 2696 kB' 'Cached: 15990868 kB' 'SwapCached: 0 kB' 'Active: 12821568 kB' 'Inactive: 3662132 kB' 'Active(anon): 12254836 kB' 'Inactive(anon): 0 kB' 'Active(file): 566732 kB' 'Inactive(file): 3662132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493432 kB' 'Mapped: 215840 kB' 'Shmem: 11764700 kB' 'KReclaimable: 441728 kB' 'Slab: 843724 kB' 'SReclaimable: 441728 kB' 'SUnreclaim: 401996 kB' 'KernelStack: 12848 kB' 'PageTables: 8420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13377648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197244 kB' 'VmallocChunk: 0 kB' 'Percpu: 40320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2563676 kB' 'DirectMap2M: 24619008 kB' 'DirectMap1G: 41943040 kB' 00:05:14.228 06:31:18 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.228 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.228 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.228 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- 
setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.229 06:31:18 -- setup/common.sh@33 -- # echo 0 00:05:14.229 06:31:18 -- setup/common.sh@33 -- # return 0 00:05:14.229 06:31:18 -- setup/hugepages.sh@97 -- # anon=0 00:05:14.229 06:31:18 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:14.229 
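The wall of "continue" lines above is bash xtrace from the get_meminfo helper in setup/common.sh: it walks every key of the memory snapshot, skips keys that do not match the one requested, and when it reaches AnonHugePages it echoes that key's value (0 here) and returns, which setup/hugepages.sh then stores as anon=0. A minimal re-creation of that helper, reconstructed from the trace rather than copied from the repository (so names and details are approximate), looks like:

  shopt -s extglob
  get_meminfo() {
      local get=$1 node=${2:-}
      local -a mem
      local mem_f=/proc/meminfo
      # with a node argument, the per-node meminfo file is used instead (see further below)
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")      # per-node files prefix every line with "Node <n> "
      local var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }   # e.g. AnonHugePages -> 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

Each non-matching key shows up in the xtrace as one "[[ ... ]]" test followed by "continue", which is why the log repeats the whole /proc/meminfo key list for every counter that hugepages.sh asks for.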
06:31:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:14.229 06:31:18 -- setup/common.sh@18 -- # local node= 00:05:14.229 06:31:18 -- setup/common.sh@19 -- # local var val 00:05:14.229 06:31:18 -- setup/common.sh@20 -- # local mem_f mem 00:05:14.229 06:31:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.229 06:31:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.229 06:31:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.229 06:31:18 -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.229 06:31:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39782708 kB' 'MemAvailable: 43808864 kB' 'Buffers: 2696 kB' 'Cached: 15990868 kB' 'SwapCached: 0 kB' 'Active: 12821972 kB' 'Inactive: 3662132 kB' 'Active(anon): 12255240 kB' 'Inactive(anon): 0 kB' 'Active(file): 566732 kB' 'Inactive(file): 3662132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493824 kB' 'Mapped: 215840 kB' 'Shmem: 11764700 kB' 'KReclaimable: 441728 kB' 'Slab: 843716 kB' 'SReclaimable: 441728 kB' 'SUnreclaim: 401988 kB' 'KernelStack: 12816 kB' 'PageTables: 8308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13377660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197212 kB' 'VmallocChunk: 0 kB' 'Percpu: 40320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2563676 kB' 'DirectMap2M: 24619008 kB' 'DirectMap1G: 41943040 kB' 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.229 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.229 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # 
continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.230 06:31:18 -- setup/common.sh@33 -- # echo 0 00:05:14.230 06:31:18 -- setup/common.sh@33 -- # return 0 00:05:14.230 06:31:18 -- setup/hugepages.sh@99 -- # surp=0 00:05:14.230 06:31:18 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:14.230 06:31:18 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:14.230 06:31:18 -- setup/common.sh@18 -- # local node= 00:05:14.230 06:31:18 -- setup/common.sh@19 -- # local var val 00:05:14.230 06:31:18 -- setup/common.sh@20 -- # local mem_f mem 00:05:14.230 06:31:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.230 06:31:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.230 06:31:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.230 06:31:18 -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.230 06:31:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39782708 kB' 'MemAvailable: 43808864 kB' 'Buffers: 2696 kB' 'Cached: 15990884 kB' 'SwapCached: 0 kB' 
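Right after the anonymous-hugepage count, the same helper is called for the surplus and reserved counters: the trace above resolves HugePages_Surp to 0 (surp=0), and the lines that follow repeat the exercise for HugePages_Rsvd, whose value is also 0 in the snapshot. Assuming the get_meminfo sketch above, the two calls amount to no more than:

  surp=$(get_meminfo HugePages_Surp)   # -> 0 on this host
  resv=$(get_meminfo HugePages_Rsvd)   # -> 0 on this host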
'Active: 12821488 kB' 'Inactive: 3662132 kB' 'Active(anon): 12254756 kB' 'Inactive(anon): 0 kB' 'Active(file): 566732 kB' 'Inactive(file): 3662132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493308 kB' 'Mapped: 215768 kB' 'Shmem: 11764716 kB' 'KReclaimable: 441728 kB' 'Slab: 843716 kB' 'SReclaimable: 441728 kB' 'SUnreclaim: 401988 kB' 'KernelStack: 12848 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13377672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197244 kB' 'VmallocChunk: 0 kB' 'Percpu: 40320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2563676 kB' 'DirectMap2M: 24619008 kB' 'DirectMap1G: 41943040 kB' 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.230 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.230 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 
00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.231 06:31:18 -- setup/common.sh@33 -- # echo 0 00:05:14.231 06:31:18 -- setup/common.sh@33 -- # return 0 00:05:14.231 06:31:18 -- setup/hugepages.sh@100 -- # resv=0 00:05:14.231 06:31:18 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:14.231 nr_hugepages=1024 00:05:14.231 06:31:18 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:14.231 resv_hugepages=0 00:05:14.231 06:31:18 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:14.231 surplus_hugepages=0 00:05:14.231 06:31:18 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:14.231 anon_hugepages=0 00:05:14.231 06:31:18 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:14.231 06:31:18 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:14.231 06:31:18 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:14.231 06:31:18 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:14.231 06:31:18 -- setup/common.sh@18 -- # local node= 00:05:14.231 06:31:18 -- setup/common.sh@19 -- # local var val 00:05:14.231 06:31:18 -- setup/common.sh@20 -- # local mem_f mem 00:05:14.231 06:31:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.231 06:31:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.231 06:31:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.231 06:31:18 -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.231 06:31:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39788520 kB' 'MemAvailable: 43814676 kB' 'Buffers: 2696 kB' 'Cached: 15990900 kB' 'SwapCached: 0 kB' 'Active: 12821800 kB' 'Inactive: 3662132 kB' 'Active(anon): 12255068 kB' 'Inactive(anon): 0 kB' 'Active(file): 566732 kB' 'Inactive(file): 3662132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493572 kB' 'Mapped: 215692 kB' 'Shmem: 11764732 kB' 'KReclaimable: 441728 kB' 'Slab: 843684 kB' 'SReclaimable: 441728 kB' 'SUnreclaim: 401956 kB' 'KernelStack: 12880 kB' 'PageTables: 8164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13379940 kB' 'VmallocTotal: 
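With resv=0 echoed above, setup/hugepages.sh now has every number it needs (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0), and the xtrace shows it re-reading HugePages_Total and checking that the accounting adds up before dropping down to the per-node counters. The exact expressions live in hugepages.sh; inferred from the "(( ... ))" lines in the trace, they are roughly:

  # values as echoed in the log above
  nr_hugepages=1024 surp=0 resv=0 anon=0
  # sanity checks seen in the xtrace, written out as plain arithmetic
  (( 1024 == nr_hugepages + surp + resv ))                             # requested pages == tracked pages
  (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))   # kernel agrees: 1024

Only if these comparisons hold does the test go on to compare the per-node allocation against the expected 1024 pages.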
34359738367 kB' 'VmallocUsed: 197340 kB' 'VmallocChunk: 0 kB' 'Percpu: 40320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2563676 kB' 'DirectMap2M: 24619008 kB' 'DirectMap1G: 41943040 kB' 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.231 06:31:18 -- 
setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.231 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.231 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.232 06:31:18 -- 
setup/common.sh@33 -- # echo 1024 00:05:14.232 06:31:18 -- setup/common.sh@33 -- # return 0 00:05:14.232 06:31:18 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:14.232 06:31:18 -- setup/hugepages.sh@112 -- # get_nodes 00:05:14.232 06:31:18 -- setup/hugepages.sh@27 -- # local node 00:05:14.232 06:31:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:14.232 06:31:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:14.232 06:31:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:14.232 06:31:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:14.232 06:31:18 -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:14.232 06:31:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:14.232 06:31:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:14.232 06:31:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:14.232 06:31:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:14.232 06:31:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:14.232 06:31:18 -- setup/common.sh@18 -- # local node=0 00:05:14.232 06:31:18 -- setup/common.sh@19 -- # local var val 00:05:14.232 06:31:18 -- setup/common.sh@20 -- # local mem_f mem 00:05:14.232 06:31:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.232 06:31:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:14.232 06:31:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:14.232 06:31:18 -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.232 06:31:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 16008584 kB' 'MemUsed: 16821300 kB' 'SwapCached: 0 kB' 'Active: 9863636 kB' 'Inactive: 3496876 kB' 'Active(anon): 9480264 kB' 'Inactive(anon): 0 kB' 'Active(file): 383372 kB' 'Inactive(file): 3496876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 13058104 kB' 'Mapped: 141760 kB' 'AnonPages: 306024 kB' 'Shmem: 9177856 kB' 'KernelStack: 7272 kB' 'PageTables: 6072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 290008 kB' 'Slab: 523496 kB' 'SReclaimable: 290008 kB' 'SUnreclaim: 233488 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # 
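The chunk above also shows the switch to per-node accounting: get_nodes walks /sys/devices/system/node/node<N>, records 1024 pages for node0 and 0 for node1 (no_nodes=2), and get_meminfo is then re-invoked with a node argument, so mem_f becomes /sys/devices/system/node/node0/meminfo instead of /proc/meminfo. The trace only shows the resulting values, not where they are read from; one plausible reconstruction, using the standard sysfs per-node hugepage counter, is:

  nodes_sys=()
  for node in /sys/devices/system/node/node[0-9]*; do
      # the log records 1024 for node0 and 0 for node1; the sysfs counter is one likely source
      nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  no_nodes=${#nodes_sys[@]}        # 2 on this machine

  get_meminfo HugePages_Surp 0     # node argument set -> parses /sys/devices/system/node/node0/meminfo, returns 0

The node0 meminfo lines all begin with "Node 0 ", which is exactly what the "${mem[@]#Node +([0-9]) }" stripping in the helper sketch is there for.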
read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 
06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # continue 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.232 06:31:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.232 06:31:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.232 06:31:18 -- setup/common.sh@33 -- # echo 0 00:05:14.232 06:31:18 -- setup/common.sh@33 -- # return 0 00:05:14.232 06:31:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:14.232 06:31:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:14.232 06:31:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:14.232 06:31:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:14.232 06:31:18 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:14.232 node0=1024 expecting 1024 00:05:14.232 06:31:18 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:14.232 00:05:14.232 real 0m2.729s 00:05:14.232 user 0m1.115s 00:05:14.232 sys 0m1.535s 00:05:14.232 06:31:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:14.232 06:31:18 -- common/autotest_common.sh@10 -- # set +x 00:05:14.232 ************************************ 00:05:14.232 END TEST no_shrink_alloc 00:05:14.232 ************************************ 00:05:14.232 06:31:18 -- setup/hugepages.sh@217 -- # clear_hp 00:05:14.232 06:31:18 -- setup/hugepages.sh@37 -- # local node hp 00:05:14.232 06:31:18 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:14.232 
06:31:18 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:14.232 06:31:18 -- setup/hugepages.sh@41 -- # echo 0 00:05:14.232 06:31:18 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:14.232 06:31:18 -- setup/hugepages.sh@41 -- # echo 0 00:05:14.232 06:31:18 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:14.232 06:31:18 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:14.232 06:31:18 -- setup/hugepages.sh@41 -- # echo 0 00:05:14.232 06:31:18 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:14.232 06:31:18 -- setup/hugepages.sh@41 -- # echo 0 00:05:14.232 06:31:18 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:14.232 06:31:18 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:14.232 00:05:14.232 real 0m11.553s 00:05:14.232 user 0m4.493s 00:05:14.232 sys 0m5.783s 00:05:14.233 06:31:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:14.233 06:31:18 -- common/autotest_common.sh@10 -- # set +x 00:05:14.233 ************************************ 00:05:14.233 END TEST hugepages 00:05:14.233 ************************************ 00:05:14.233 06:31:18 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:14.233 06:31:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:14.233 06:31:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.233 06:31:18 -- common/autotest_common.sh@10 -- # set +x 00:05:14.489 ************************************ 00:05:14.489 START TEST driver 00:05:14.489 ************************************ 00:05:14.489 06:31:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:14.489 * Looking for test storage... 
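The hugepages suite above ends by zeroing every per-node reservation (clear_hp writes 0 into each node's hugepages-2048kB counter), and the per-node totals it checked earlier come straight from /sys/devices/system/node/nodeN/meminfo. A minimal standalone sketch of that bookkeeping, assuming the default 2048 kB page size and root access, and not the test's own helper:

#!/usr/bin/env bash
# Illustrative only: report, then release, per-node 2048 kB hugepage reservations.
for node in /sys/devices/system/node/node[0-9]*; do
    hp="$node/hugepages/hugepages-2048kB/nr_hugepages"
    [[ -e "$hp" ]] || continue                      # this node may lack that page size
    echo "${node##*/}: $(cat "$hp") pages reserved"
    echo 0 | sudo tee "$hp" > /dev/null             # clear the reservation
done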
00:05:14.489 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:14.489 06:31:18 -- setup/driver.sh@68 -- # setup reset 00:05:14.489 06:31:18 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:14.489 06:31:18 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:17.013 06:31:21 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:17.013 06:31:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.013 06:31:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.013 06:31:21 -- common/autotest_common.sh@10 -- # set +x 00:05:17.013 ************************************ 00:05:17.013 START TEST guess_driver 00:05:17.013 ************************************ 00:05:17.013 06:31:21 -- common/autotest_common.sh@1111 -- # guess_driver 00:05:17.013 06:31:21 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:17.013 06:31:21 -- setup/driver.sh@47 -- # local fail=0 00:05:17.013 06:31:21 -- setup/driver.sh@49 -- # pick_driver 00:05:17.013 06:31:21 -- setup/driver.sh@36 -- # vfio 00:05:17.013 06:31:21 -- setup/driver.sh@21 -- # local iommu_grups 00:05:17.013 06:31:21 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:17.013 06:31:21 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:17.013 06:31:21 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:17.013 06:31:21 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:17.013 06:31:21 -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:05:17.013 06:31:21 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:17.013 06:31:21 -- setup/driver.sh@14 -- # mod vfio_pci 00:05:17.013 06:31:21 -- setup/driver.sh@12 -- # dep vfio_pci 00:05:17.013 06:31:21 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:17.013 06:31:21 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:17.013 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:17.013 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:17.013 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:17.013 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:17.013 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:17.013 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:17.013 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:17.013 06:31:21 -- setup/driver.sh@30 -- # return 0 00:05:17.013 06:31:21 -- setup/driver.sh@37 -- # echo vfio-pci 00:05:17.013 06:31:21 -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:17.013 06:31:21 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:17.013 06:31:21 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:17.013 Looking for driver=vfio-pci 00:05:17.014 06:31:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.014 06:31:21 -- setup/driver.sh@45 -- # setup output config 00:05:17.014 06:31:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:17.014 06:31:21 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:17.949 06:31:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.949 06:31:22 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:05:17.949 06:31:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.949 06:31:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.949 06:31:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:17.949 06:31:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.949 06:31:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.949 06:31:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:17.949 06:31:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.949 06:31:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.949 06:31:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:17.949 06:31:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.949 06:31:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.949 06:31:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:17.949 06:31:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.949 06:31:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.949 06:31:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:17.949 06:31:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.949 06:31:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.949 06:31:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:17.949 06:31:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.949 06:31:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.949 06:31:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:17.949 06:31:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.949 06:31:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.949 06:31:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:17.949 06:31:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.949 06:31:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.949 06:31:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:17.949 06:31:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.949 06:31:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.949 06:31:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:17.949 06:31:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.949 06:31:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.949 06:31:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:17.949 06:31:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.949 06:31:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.949 06:31:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:17.949 06:31:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.949 06:31:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.949 06:31:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:17.949 06:31:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.949 06:31:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.949 06:31:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:17.949 06:31:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.949 06:31:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.949 06:31:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:17.949 06:31:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:18.883 06:31:23 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:05:18.883 06:31:23 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:18.883 06:31:23 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:19.140 06:31:23 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:19.140 06:31:23 -- setup/driver.sh@65 -- # setup reset 00:05:19.140 06:31:23 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:19.140 06:31:23 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:21.883 00:05:21.883 real 0m4.503s 00:05:21.883 user 0m0.948s 00:05:21.883 sys 0m1.715s 00:05:21.883 06:31:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:21.883 06:31:25 -- common/autotest_common.sh@10 -- # set +x 00:05:21.883 ************************************ 00:05:21.883 END TEST guess_driver 00:05:21.883 ************************************ 00:05:21.883 00:05:21.883 real 0m6.975s 00:05:21.883 user 0m1.521s 00:05:21.883 sys 0m2.738s 00:05:21.883 06:31:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:21.883 06:31:25 -- common/autotest_common.sh@10 -- # set +x 00:05:21.883 ************************************ 00:05:21.883 END TEST driver 00:05:21.883 ************************************ 00:05:21.883 06:31:25 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:21.883 06:31:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:21.883 06:31:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.883 06:31:25 -- common/autotest_common.sh@10 -- # set +x 00:05:21.883 ************************************ 00:05:21.883 START TEST devices 00:05:21.883 ************************************ 00:05:21.883 06:31:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:21.883 * Looking for test storage... 
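The guess_driver test just above settled on vfio-pci because IOMMU groups were present (141 of them) and modprobe could resolve the vfio_pci dependency chain. A rough sketch of that decision, with uio_pci_generic assumed as the fallback when no IOMMU groups exist (the real script's fallback path is not shown in this log):

#!/usr/bin/env bash
# Illustrative only: pick a userspace PCI driver the way the log suggests.
shopt -s nullglob
groups=(/sys/kernel/iommu_groups/*)
if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci > /dev/null 2>&1; then
    driver=vfio-pci            # IOMMU groups exist and the module chain resolves
else
    driver=uio_pci_generic     # assumed fallback, may differ from the test's logic
fi
echo "Looking for driver=$driver"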
00:05:21.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:21.883 06:31:26 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:21.883 06:31:26 -- setup/devices.sh@192 -- # setup reset 00:05:21.883 06:31:26 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:21.883 06:31:26 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:22.818 06:31:27 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:22.818 06:31:27 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:22.818 06:31:27 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:22.818 06:31:27 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:22.818 06:31:27 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:22.818 06:31:27 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:22.818 06:31:27 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:22.819 06:31:27 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:22.819 06:31:27 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:22.819 06:31:27 -- setup/devices.sh@196 -- # blocks=() 00:05:22.819 06:31:27 -- setup/devices.sh@196 -- # declare -a blocks 00:05:22.819 06:31:27 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:22.819 06:31:27 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:22.819 06:31:27 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:22.819 06:31:27 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:22.819 06:31:27 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:22.819 06:31:27 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:22.819 06:31:27 -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:05:22.819 06:31:27 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:05:22.819 06:31:27 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:22.819 06:31:27 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:22.819 06:31:27 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:23.077 No valid GPT data, bailing 00:05:23.077 06:31:27 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:23.077 06:31:27 -- scripts/common.sh@391 -- # pt= 00:05:23.077 06:31:27 -- scripts/common.sh@392 -- # return 1 00:05:23.077 06:31:27 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:23.077 06:31:27 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:23.077 06:31:27 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:23.077 06:31:27 -- setup/common.sh@80 -- # echo 1000204886016 00:05:23.077 06:31:27 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:05:23.077 06:31:27 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:23.077 06:31:27 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:05:23.077 06:31:27 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:23.077 06:31:27 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:23.077 06:31:27 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:23.077 06:31:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.077 06:31:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.077 06:31:27 -- common/autotest_common.sh@10 -- # set +x 00:05:23.077 ************************************ 00:05:23.077 START TEST nvme_mount 00:05:23.077 ************************************ 00:05:23.077 06:31:27 -- 
common/autotest_common.sh@1111 -- # nvme_mount 00:05:23.077 06:31:27 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:23.077 06:31:27 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:23.077 06:31:27 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:23.077 06:31:27 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:23.077 06:31:27 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:23.077 06:31:27 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:23.077 06:31:27 -- setup/common.sh@40 -- # local part_no=1 00:05:23.077 06:31:27 -- setup/common.sh@41 -- # local size=1073741824 00:05:23.077 06:31:27 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:23.077 06:31:27 -- setup/common.sh@44 -- # parts=() 00:05:23.077 06:31:27 -- setup/common.sh@44 -- # local parts 00:05:23.077 06:31:27 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:23.077 06:31:27 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:23.077 06:31:27 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:23.077 06:31:27 -- setup/common.sh@46 -- # (( part++ )) 00:05:23.077 06:31:27 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:23.077 06:31:27 -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:23.077 06:31:27 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:23.077 06:31:27 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:24.013 Creating new GPT entries in memory. 00:05:24.013 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:24.013 other utilities. 00:05:24.013 06:31:28 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:24.013 06:31:28 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:24.013 06:31:28 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:24.013 06:31:28 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:24.013 06:31:28 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:25.389 Creating new GPT entries in memory. 00:05:25.389 The operation has completed successfully. 
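The nvme_mount setup captured above follows a standard pattern: wipe the GPT with sgdisk, carve out a roughly 1 GiB first partition, format it with mkfs.ext4 -qF, and mount it under the test directory. Condensed into a standalone sketch (the disk name and mount point are assumptions, adjust them for your machine):

#!/usr/bin/env bash
set -e
# Illustrative only: the partition / format / mount pattern visible in the log.
disk=/dev/nvme0n1                        # assumed test disk
mnt=/mnt/nvme_mount                      # illustrative mount point
sudo sgdisk "$disk" --zap-all                      # destroy existing GPT/MBR structures
sudo sgdisk "$disk" --new=1:2048:2099199           # first partition, sectors 2048-2099199 (~1 GiB)
sudo mkfs.ext4 -qF "${disk}p1"                     # quiet, forced ext4
sudo mkdir -p "$mnt"
sudo mount "${disk}p1" "$mnt"

The matching teardown later in the log is simply umount followed by wipefs --all on the partition and then on the whole disk.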
00:05:25.389 06:31:29 -- setup/common.sh@57 -- # (( part++ )) 00:05:25.389 06:31:29 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:25.389 06:31:29 -- setup/common.sh@62 -- # wait 4049461 00:05:25.389 06:31:29 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.389 06:31:29 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:25.389 06:31:29 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.389 06:31:29 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:25.389 06:31:29 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:25.389 06:31:29 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.389 06:31:29 -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:25.389 06:31:29 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:25.389 06:31:29 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:25.389 06:31:29 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.389 06:31:29 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:25.389 06:31:29 -- setup/devices.sh@53 -- # local found=0 00:05:25.389 06:31:29 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:25.389 06:31:29 -- setup/devices.sh@56 -- # : 00:05:25.389 06:31:29 -- setup/devices.sh@59 -- # local pci status 00:05:25.389 06:31:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.389 06:31:29 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:25.389 06:31:29 -- setup/devices.sh@47 -- # setup output config 00:05:25.389 06:31:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:25.389 06:31:29 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:26.324 06:31:30 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.324 06:31:30 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:26.324 06:31:30 -- setup/devices.sh@63 -- # found=1 00:05:26.324 06:31:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.324 06:31:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.324 06:31:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.324 06:31:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.324 06:31:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.324 06:31:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.324 06:31:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.324 06:31:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.324 06:31:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.324 06:31:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.324 
06:31:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.324 06:31:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.324 06:31:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.324 06:31:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.324 06:31:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.324 06:31:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.324 06:31:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.324 06:31:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.324 06:31:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.324 06:31:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.324 06:31:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.324 06:31:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.324 06:31:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.324 06:31:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.324 06:31:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.324 06:31:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.324 06:31:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.324 06:31:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.324 06:31:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.324 06:31:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.324 06:31:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.324 06:31:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.324 06:31:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.324 06:31:30 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:26.324 06:31:30 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:26.324 06:31:30 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:26.324 06:31:30 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:26.324 06:31:30 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:26.324 06:31:30 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:26.324 06:31:30 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:26.324 06:31:30 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:26.324 06:31:30 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:26.324 06:31:30 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:26.324 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:26.324 06:31:30 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:26.324 06:31:30 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:26.582 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:26.582 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:26.582 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:26.582 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:26.582 06:31:31 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:26.582 06:31:31 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:26.582 06:31:31 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:26.582 06:31:31 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:26.582 06:31:31 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:26.582 06:31:31 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:26.582 06:31:31 -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:26.582 06:31:31 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:26.582 06:31:31 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:26.582 06:31:31 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:26.582 06:31:31 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:26.582 06:31:31 -- setup/devices.sh@53 -- # local found=0 00:05:26.582 06:31:31 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:26.582 06:31:31 -- setup/devices.sh@56 -- # : 00:05:26.582 06:31:31 -- setup/devices.sh@59 -- # local pci status 00:05:26.582 06:31:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.582 06:31:31 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:26.582 06:31:31 -- setup/devices.sh@47 -- # setup output config 00:05:26.582 06:31:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:26.582 06:31:31 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:27.988 06:31:32 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.989 06:31:32 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:27.989 06:31:32 -- setup/devices.sh@63 -- # found=1 00:05:27.989 06:31:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.989 06:31:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.989 06:31:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.989 06:31:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.989 06:31:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.989 06:31:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.989 06:31:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.989 06:31:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.989 06:31:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.989 06:31:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.989 06:31:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.989 06:31:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.989 06:31:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.989 06:31:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.989 06:31:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.989 06:31:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.989 06:31:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.989 06:31:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.989 06:31:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.989 06:31:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.989 06:31:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.989 06:31:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.989 06:31:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.989 06:31:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.989 06:31:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.989 06:31:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.989 06:31:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.989 06:31:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.989 06:31:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.989 06:31:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.989 06:31:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.989 06:31:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.989 06:31:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.989 06:31:32 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:27.989 06:31:32 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:27.989 06:31:32 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:27.989 06:31:32 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:27.989 06:31:32 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:27.989 06:31:32 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:27.989 06:31:32 -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:05:27.989 06:31:32 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:27.989 06:31:32 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:27.989 06:31:32 -- setup/devices.sh@50 -- # local mount_point= 00:05:27.989 06:31:32 -- setup/devices.sh@51 -- # local test_file= 00:05:27.989 06:31:32 -- setup/devices.sh@53 -- # local found=0 00:05:27.989 06:31:32 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:27.989 06:31:32 -- setup/devices.sh@59 -- # local pci status 00:05:27.989 06:31:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.989 06:31:32 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:27.989 06:31:32 -- setup/devices.sh@47 -- # setup output config 00:05:27.989 06:31:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:27.989 06:31:32 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:28.922 06:31:33 -- 
setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.922 06:31:33 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:28.922 06:31:33 -- setup/devices.sh@63 -- # found=1 00:05:28.922 06:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.922 06:31:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.922 06:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.922 06:31:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.922 06:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.922 06:31:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.922 06:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.922 06:31:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.922 06:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.922 06:31:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.922 06:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.922 06:31:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.922 06:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.922 06:31:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.922 06:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.922 06:31:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.923 06:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.923 06:31:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.923 06:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.923 06:31:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.923 06:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.923 06:31:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.923 06:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.923 06:31:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.923 06:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.923 06:31:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.923 06:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.923 06:31:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.923 06:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.923 06:31:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.923 06:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.923 06:31:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:28.923 06:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.181 06:31:33 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:29.181 06:31:33 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:29.181 06:31:33 -- setup/devices.sh@68 -- # return 0 00:05:29.181 06:31:33 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:29.181 06:31:33 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:29.181 06:31:33 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:05:29.181 06:31:33 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:29.181 06:31:33 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:29.181 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:29.181 00:05:29.181 real 0m6.054s 00:05:29.181 user 0m1.434s 00:05:29.181 sys 0m2.195s 00:05:29.181 06:31:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:29.181 06:31:33 -- common/autotest_common.sh@10 -- # set +x 00:05:29.181 ************************************ 00:05:29.181 END TEST nvme_mount 00:05:29.181 ************************************ 00:05:29.181 06:31:33 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:29.181 06:31:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:29.181 06:31:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.181 06:31:33 -- common/autotest_common.sh@10 -- # set +x 00:05:29.181 ************************************ 00:05:29.181 START TEST dm_mount 00:05:29.181 ************************************ 00:05:29.181 06:31:33 -- common/autotest_common.sh@1111 -- # dm_mount 00:05:29.181 06:31:33 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:29.181 06:31:33 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:29.181 06:31:33 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:29.181 06:31:33 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:29.181 06:31:33 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:29.181 06:31:33 -- setup/common.sh@40 -- # local part_no=2 00:05:29.181 06:31:33 -- setup/common.sh@41 -- # local size=1073741824 00:05:29.181 06:31:33 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:29.181 06:31:33 -- setup/common.sh@44 -- # parts=() 00:05:29.181 06:31:33 -- setup/common.sh@44 -- # local parts 00:05:29.181 06:31:33 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:29.181 06:31:33 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:29.181 06:31:33 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:29.181 06:31:33 -- setup/common.sh@46 -- # (( part++ )) 00:05:29.181 06:31:33 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:29.181 06:31:33 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:29.181 06:31:33 -- setup/common.sh@46 -- # (( part++ )) 00:05:29.181 06:31:33 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:29.181 06:31:33 -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:29.181 06:31:33 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:29.181 06:31:33 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:30.556 Creating new GPT entries in memory. 00:05:30.556 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:30.556 other utilities. 00:05:30.556 06:31:34 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:30.556 06:31:34 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:30.556 06:31:34 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:30.556 06:31:34 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:30.556 06:31:34 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:31.490 Creating new GPT entries in memory. 00:05:31.490 The operation has completed successfully. 
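For dm_mount the log shows two 1 GiB partitions being created and then hidden behind a single device-mapper node (/dev/dm-0, name nvme_dm_test). The mapping table itself is never printed, so the following is only a plausible reconstruction using a linear concatenation of the two partitions, not necessarily what the test builds:

#!/usr/bin/env bash
set -e
# Illustrative only: build one device-mapper device over two partitions.
p1=/dev/nvme0n1p1; p2=/dev/nvme0n1p2
s1=$(sudo blockdev --getsz "$p1")        # sizes in 512-byte sectors
s2=$(sudo blockdev --getsz "$p2")
sudo dmsetup create nvme_dm_test <<EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF
sudo mkfs.ext4 -qF /dev/mapper/nvme_dm_test        # same quiet/forced format as above
sudo mount /dev/mapper/nvme_dm_test /mnt/dm_mount  # mount point is illustrative

The cleanup path seen further down mirrors this in reverse: umount, dmsetup remove --force nvme_dm_test, then wipefs --all on each partition.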
00:05:31.490 06:31:35 -- setup/common.sh@57 -- # (( part++ )) 00:05:31.490 06:31:35 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:31.490 06:31:35 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:31.490 06:31:35 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:31.490 06:31:35 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:32.425 The operation has completed successfully. 00:05:32.425 06:31:36 -- setup/common.sh@57 -- # (( part++ )) 00:05:32.425 06:31:36 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:32.425 06:31:36 -- setup/common.sh@62 -- # wait 4051798 00:05:32.425 06:31:36 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:32.425 06:31:36 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:32.425 06:31:36 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:32.425 06:31:36 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:32.425 06:31:36 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:32.425 06:31:36 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:32.425 06:31:36 -- setup/devices.sh@161 -- # break 00:05:32.425 06:31:36 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:32.425 06:31:36 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:32.425 06:31:36 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:32.425 06:31:36 -- setup/devices.sh@166 -- # dm=dm-0 00:05:32.425 06:31:36 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:32.425 06:31:36 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:32.425 06:31:36 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:32.425 06:31:36 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:32.425 06:31:36 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:32.425 06:31:36 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:32.425 06:31:36 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:32.425 06:31:36 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:32.425 06:31:36 -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:32.425 06:31:36 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:32.425 06:31:36 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:32.425 06:31:36 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:32.425 06:31:36 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:32.425 06:31:36 -- setup/devices.sh@53 -- # local found=0 00:05:32.425 06:31:36 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:32.425 06:31:36 -- setup/devices.sh@56 -- # : 00:05:32.425 06:31:36 -- 
setup/devices.sh@59 -- # local pci status 00:05:32.425 06:31:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.425 06:31:36 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:32.425 06:31:36 -- setup/devices.sh@47 -- # setup output config 00:05:32.425 06:31:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:32.425 06:31:36 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:33.360 06:31:37 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.360 06:31:37 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:33.360 06:31:37 -- setup/devices.sh@63 -- # found=1 00:05:33.360 06:31:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.360 06:31:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.360 06:31:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.360 06:31:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.360 06:31:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.360 06:31:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.360 06:31:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.360 06:31:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.360 06:31:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.360 06:31:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.360 06:31:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.360 06:31:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.360 06:31:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.360 06:31:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.360 06:31:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.360 06:31:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.360 06:31:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.360 06:31:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.360 06:31:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.360 06:31:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.360 06:31:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.360 06:31:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.360 06:31:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.360 06:31:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.360 06:31:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.360 06:31:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.360 06:31:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.360 06:31:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.360 06:31:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.360 06:31:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.360 06:31:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.360 06:31:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.360 06:31:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.618 06:31:38 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:33.618 06:31:38 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:33.618 06:31:38 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:33.618 06:31:38 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:33.618 06:31:38 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:33.618 06:31:38 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:33.618 06:31:38 -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:33.618 06:31:38 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:33.618 06:31:38 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:33.618 06:31:38 -- setup/devices.sh@50 -- # local mount_point= 00:05:33.618 06:31:38 -- setup/devices.sh@51 -- # local test_file= 00:05:33.618 06:31:38 -- setup/devices.sh@53 -- # local found=0 00:05:33.618 06:31:38 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:33.618 06:31:38 -- setup/devices.sh@59 -- # local pci status 00:05:33.618 06:31:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.618 06:31:38 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:33.618 06:31:38 -- setup/devices.sh@47 -- # setup output config 00:05:33.618 06:31:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:33.618 06:31:38 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:34.553 06:31:39 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.553 06:31:39 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:34.554 06:31:39 -- setup/devices.sh@63 -- # found=1 00:05:34.554 06:31:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.554 06:31:39 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.554 06:31:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.554 06:31:39 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.554 06:31:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.554 06:31:39 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.554 06:31:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.554 06:31:39 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.554 06:31:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.554 06:31:39 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.554 06:31:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.554 06:31:39 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.554 06:31:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.554 06:31:39 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.554 06:31:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:05:34.554 06:31:39 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.554 06:31:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.554 06:31:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.554 06:31:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.554 06:31:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.554 06:31:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.554 06:31:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.554 06:31:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.554 06:31:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.554 06:31:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.554 06:31:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.554 06:31:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.554 06:31:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.554 06:31:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.554 06:31:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.554 06:31:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.554 06:31:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.554 06:31:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.812 06:31:39 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:34.812 06:31:39 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:34.812 06:31:39 -- setup/devices.sh@68 -- # return 0 00:05:34.812 06:31:39 -- setup/devices.sh@187 -- # cleanup_dm 00:05:34.812 06:31:39 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:34.812 06:31:39 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:34.812 06:31:39 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:34.812 06:31:39 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:34.812 06:31:39 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:34.812 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:34.812 06:31:39 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:34.812 06:31:39 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:34.812 00:05:34.812 real 0m5.575s 00:05:34.812 user 0m0.934s 00:05:34.812 sys 0m1.517s 00:05:34.812 06:31:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:34.812 06:31:39 -- common/autotest_common.sh@10 -- # set +x 00:05:34.812 ************************************ 00:05:34.812 END TEST dm_mount 00:05:34.812 ************************************ 00:05:34.812 06:31:39 -- setup/devices.sh@1 -- # cleanup 00:05:34.812 06:31:39 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:34.813 06:31:39 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:34.813 06:31:39 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:34.813 06:31:39 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:34.813 06:31:39 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:34.813 06:31:39 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:35.072 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:35.072 /dev/nvme0n1: 8 bytes were erased at offset 
0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:35.072 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:35.072 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:35.072 06:31:39 -- setup/devices.sh@12 -- # cleanup_dm 00:05:35.072 06:31:39 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:35.072 06:31:39 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:35.072 06:31:39 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:35.072 06:31:39 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:35.072 06:31:39 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:35.072 06:31:39 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:35.072 00:05:35.072 real 0m13.654s 00:05:35.072 user 0m3.075s 00:05:35.072 sys 0m4.778s 00:05:35.072 06:31:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:35.072 06:31:39 -- common/autotest_common.sh@10 -- # set +x 00:05:35.072 ************************************ 00:05:35.072 END TEST devices 00:05:35.072 ************************************ 00:05:35.072 00:05:35.072 real 0m42.769s 00:05:35.072 user 0m12.381s 00:05:35.072 sys 0m18.645s 00:05:35.072 06:31:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:35.072 06:31:39 -- common/autotest_common.sh@10 -- # set +x 00:05:35.072 ************************************ 00:05:35.072 END TEST setup.sh 00:05:35.072 ************************************ 00:05:35.072 06:31:39 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:36.446 Hugepages 00:05:36.446 node hugesize free / total 00:05:36.446 node0 1048576kB 0 / 0 00:05:36.446 node0 2048kB 2048 / 2048 00:05:36.446 node1 1048576kB 0 / 0 00:05:36.446 node1 2048kB 0 / 0 00:05:36.446 00:05:36.446 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:36.446 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:36.446 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:36.446 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:36.446 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:36.446 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:36.446 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:36.446 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:36.446 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:36.446 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:36.446 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:36.446 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:36.446 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:36.446 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:36.446 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:36.446 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:36.446 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:36.446 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:36.446 06:31:40 -- spdk/autotest.sh@130 -- # uname -s 00:05:36.446 06:31:40 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:36.446 06:31:40 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:36.446 06:31:40 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:37.381 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:37.381 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:37.381 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:37.381 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:37.381 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:37.381 0000:00:04.2 (8086 0e22): 
ioatdma -> vfio-pci 00:05:37.381 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:37.381 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:37.381 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:37.381 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:37.381 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:37.381 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:37.381 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:37.381 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:37.381 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:37.381 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:38.753 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:38.753 06:31:43 -- common/autotest_common.sh@1518 -- # sleep 1 00:05:39.686 06:31:44 -- common/autotest_common.sh@1519 -- # bdfs=() 00:05:39.686 06:31:44 -- common/autotest_common.sh@1519 -- # local bdfs 00:05:39.686 06:31:44 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:39.686 06:31:44 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:39.686 06:31:44 -- common/autotest_common.sh@1499 -- # bdfs=() 00:05:39.686 06:31:44 -- common/autotest_common.sh@1499 -- # local bdfs 00:05:39.686 06:31:44 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:39.686 06:31:44 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:39.686 06:31:44 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:05:39.686 06:31:44 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:05:39.686 06:31:44 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:88:00.0 00:05:39.686 06:31:44 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:40.619 Waiting for block devices as requested 00:05:40.878 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:40.878 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:40.878 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:40.878 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:41.136 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:41.136 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:41.136 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:41.136 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:41.396 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:41.396 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:41.396 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:41.396 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:41.658 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:41.658 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:41.658 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:41.916 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:41.916 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:41.916 06:31:46 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:41.916 06:31:46 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:41.916 06:31:46 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 00:05:41.916 06:31:46 -- common/autotest_common.sh@1488 -- # grep 0000:88:00.0/nvme/nvme 00:05:41.916 06:31:46 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:41.916 06:31:46 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:41.916 06:31:46 -- 
common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:41.916 06:31:46 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:05:41.916 06:31:46 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:41.916 06:31:46 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:41.916 06:31:46 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:41.916 06:31:46 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:41.916 06:31:46 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:41.916 06:31:46 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:05:41.916 06:31:46 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:41.916 06:31:46 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:41.916 06:31:46 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:41.916 06:31:46 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:41.916 06:31:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:41.916 06:31:46 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:41.916 06:31:46 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:41.916 06:31:46 -- common/autotest_common.sh@1543 -- # continue 00:05:41.916 06:31:46 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:41.916 06:31:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:41.916 06:31:46 -- common/autotest_common.sh@10 -- # set +x 00:05:41.916 06:31:46 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:41.916 06:31:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:41.916 06:31:46 -- common/autotest_common.sh@10 -- # set +x 00:05:41.916 06:31:46 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:43.291 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:43.291 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:43.291 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:43.291 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:43.291 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:43.291 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:43.291 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:43.291 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:43.291 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:43.291 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:43.291 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:43.291 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:43.291 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:43.291 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:43.291 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:43.291 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:44.227 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:44.227 06:31:48 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:44.227 06:31:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:44.227 06:31:48 -- common/autotest_common.sh@10 -- # set +x 00:05:44.227 06:31:48 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:44.227 06:31:48 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:05:44.227 06:31:48 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:05:44.227 06:31:48 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:44.227 06:31:48 -- common/autotest_common.sh@1563 -- # local bdfs 00:05:44.227 06:31:48 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:05:44.227 06:31:48 -- common/autotest_common.sh@1499 -- # bdfs=() 00:05:44.227 
06:31:48 -- common/autotest_common.sh@1499 -- # local bdfs 00:05:44.227 06:31:48 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:44.227 06:31:48 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:44.227 06:31:48 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:05:44.485 06:31:48 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:05:44.485 06:31:48 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:88:00.0 00:05:44.485 06:31:48 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:05:44.485 06:31:48 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:44.485 06:31:48 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:44.485 06:31:48 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:44.485 06:31:48 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:05:44.485 06:31:48 -- common/autotest_common.sh@1572 -- # printf '%s\n' 0000:88:00.0 00:05:44.485 06:31:48 -- common/autotest_common.sh@1578 -- # [[ -z 0000:88:00.0 ]] 00:05:44.485 06:31:48 -- common/autotest_common.sh@1583 -- # spdk_tgt_pid=4057030 00:05:44.485 06:31:48 -- common/autotest_common.sh@1582 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.485 06:31:48 -- common/autotest_common.sh@1584 -- # waitforlisten 4057030 00:05:44.485 06:31:48 -- common/autotest_common.sh@817 -- # '[' -z 4057030 ']' 00:05:44.485 06:31:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.485 06:31:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:44.485 06:31:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.485 06:31:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:44.485 06:31:48 -- common/autotest_common.sh@10 -- # set +x 00:05:44.485 [2024-04-17 06:31:48.926294] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:05:44.485 [2024-04-17 06:31:48.926395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4057030 ] 00:05:44.485 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.485 [2024-04-17 06:31:48.990222] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.485 [2024-04-17 06:31:49.077703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.743 06:31:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:44.743 06:31:49 -- common/autotest_common.sh@850 -- # return 0 00:05:44.743 06:31:49 -- common/autotest_common.sh@1586 -- # bdf_id=0 00:05:44.743 06:31:49 -- common/autotest_common.sh@1587 -- # for bdf in "${bdfs[@]}" 00:05:44.743 06:31:49 -- common/autotest_common.sh@1588 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:48.024 nvme0n1 00:05:48.025 06:31:52 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:48.025 [2024-04-17 06:31:52.627004] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:48.025 [2024-04-17 06:31:52.627049] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:48.025 request: 00:05:48.025 { 00:05:48.025 "nvme_ctrlr_name": "nvme0", 00:05:48.025 "password": "test", 00:05:48.025 "method": "bdev_nvme_opal_revert", 00:05:48.025 "req_id": 1 00:05:48.025 } 00:05:48.025 Got JSON-RPC error response 00:05:48.025 response: 00:05:48.025 { 00:05:48.025 "code": -32603, 00:05:48.025 "message": "Internal error" 00:05:48.025 } 00:05:48.283 06:31:52 -- common/autotest_common.sh@1590 -- # true 00:05:48.283 06:31:52 -- common/autotest_common.sh@1591 -- # (( ++bdf_id )) 00:05:48.283 06:31:52 -- common/autotest_common.sh@1594 -- # killprocess 4057030 00:05:48.283 06:31:52 -- common/autotest_common.sh@936 -- # '[' -z 4057030 ']' 00:05:48.283 06:31:52 -- common/autotest_common.sh@940 -- # kill -0 4057030 00:05:48.283 06:31:52 -- common/autotest_common.sh@941 -- # uname 00:05:48.283 06:31:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:48.283 06:31:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4057030 00:05:48.283 06:31:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:48.283 06:31:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:48.283 06:31:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4057030' 00:05:48.283 killing process with pid 4057030 00:05:48.283 06:31:52 -- common/autotest_common.sh@955 -- # kill 4057030 00:05:48.283 06:31:52 -- common/autotest_common.sh@960 -- # wait 4057030 00:05:50.182 06:31:54 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:50.182 06:31:54 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:50.182 06:31:54 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:50.182 06:31:54 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:50.182 06:31:54 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:50.182 06:31:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:50.182 06:31:54 -- common/autotest_common.sh@10 -- # set +x 00:05:50.182 06:31:54 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:50.182 06:31:54 
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:50.182 06:31:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.182 06:31:54 -- common/autotest_common.sh@10 -- # set +x 00:05:50.182 ************************************ 00:05:50.182 START TEST env 00:05:50.182 ************************************ 00:05:50.182 06:31:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:50.182 * Looking for test storage... 00:05:50.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:50.182 06:31:54 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:50.182 06:31:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:50.182 06:31:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.182 06:31:54 -- common/autotest_common.sh@10 -- # set +x 00:05:50.182 ************************************ 00:05:50.182 START TEST env_memory 00:05:50.182 ************************************ 00:05:50.182 06:31:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:50.182 00:05:50.182 00:05:50.182 CUnit - A unit testing framework for C - Version 2.1-3 00:05:50.182 http://cunit.sourceforge.net/ 00:05:50.182 00:05:50.182 00:05:50.182 Suite: memory 00:05:50.182 Test: alloc and free memory map ...[2024-04-17 06:31:54.683251] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:50.182 passed 00:05:50.182 Test: mem map translation ...[2024-04-17 06:31:54.703337] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:50.182 [2024-04-17 06:31:54.703360] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:50.182 [2024-04-17 06:31:54.703417] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:50.182 [2024-04-17 06:31:54.703429] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:50.182 passed 00:05:50.182 Test: mem map registration ...[2024-04-17 06:31:54.745934] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:50.182 [2024-04-17 06:31:54.745954] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:50.182 passed 00:05:50.440 Test: mem map adjacent registrations ...passed 00:05:50.440 00:05:50.440 Run Summary: Type Total Ran Passed Failed Inactive 00:05:50.440 suites 1 1 n/a 0 0 00:05:50.440 tests 4 4 4 0 0 00:05:50.440 asserts 152 152 152 0 n/a 00:05:50.440 00:05:50.440 Elapsed time = 0.141 seconds 00:05:50.440 00:05:50.440 real 0m0.148s 00:05:50.440 user 0m0.139s 00:05:50.440 sys 0m0.008s 00:05:50.440 06:31:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:50.440 06:31:54 -- common/autotest_common.sh@10 -- # set +x 
00:05:50.440 ************************************ 00:05:50.440 END TEST env_memory 00:05:50.440 ************************************ 00:05:50.440 06:31:54 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:50.440 06:31:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:50.440 06:31:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.440 06:31:54 -- common/autotest_common.sh@10 -- # set +x 00:05:50.440 ************************************ 00:05:50.440 START TEST env_vtophys 00:05:50.440 ************************************ 00:05:50.440 06:31:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:50.440 EAL: lib.eal log level changed from notice to debug 00:05:50.440 EAL: Detected lcore 0 as core 0 on socket 0 00:05:50.440 EAL: Detected lcore 1 as core 1 on socket 0 00:05:50.440 EAL: Detected lcore 2 as core 2 on socket 0 00:05:50.440 EAL: Detected lcore 3 as core 3 on socket 0 00:05:50.440 EAL: Detected lcore 4 as core 4 on socket 0 00:05:50.440 EAL: Detected lcore 5 as core 5 on socket 0 00:05:50.440 EAL: Detected lcore 6 as core 8 on socket 0 00:05:50.440 EAL: Detected lcore 7 as core 9 on socket 0 00:05:50.440 EAL: Detected lcore 8 as core 10 on socket 0 00:05:50.440 EAL: Detected lcore 9 as core 11 on socket 0 00:05:50.440 EAL: Detected lcore 10 as core 12 on socket 0 00:05:50.440 EAL: Detected lcore 11 as core 13 on socket 0 00:05:50.440 EAL: Detected lcore 12 as core 0 on socket 1 00:05:50.440 EAL: Detected lcore 13 as core 1 on socket 1 00:05:50.440 EAL: Detected lcore 14 as core 2 on socket 1 00:05:50.440 EAL: Detected lcore 15 as core 3 on socket 1 00:05:50.440 EAL: Detected lcore 16 as core 4 on socket 1 00:05:50.440 EAL: Detected lcore 17 as core 5 on socket 1 00:05:50.440 EAL: Detected lcore 18 as core 8 on socket 1 00:05:50.440 EAL: Detected lcore 19 as core 9 on socket 1 00:05:50.440 EAL: Detected lcore 20 as core 10 on socket 1 00:05:50.440 EAL: Detected lcore 21 as core 11 on socket 1 00:05:50.440 EAL: Detected lcore 22 as core 12 on socket 1 00:05:50.440 EAL: Detected lcore 23 as core 13 on socket 1 00:05:50.440 EAL: Detected lcore 24 as core 0 on socket 0 00:05:50.440 EAL: Detected lcore 25 as core 1 on socket 0 00:05:50.440 EAL: Detected lcore 26 as core 2 on socket 0 00:05:50.440 EAL: Detected lcore 27 as core 3 on socket 0 00:05:50.440 EAL: Detected lcore 28 as core 4 on socket 0 00:05:50.440 EAL: Detected lcore 29 as core 5 on socket 0 00:05:50.440 EAL: Detected lcore 30 as core 8 on socket 0 00:05:50.440 EAL: Detected lcore 31 as core 9 on socket 0 00:05:50.440 EAL: Detected lcore 32 as core 10 on socket 0 00:05:50.440 EAL: Detected lcore 33 as core 11 on socket 0 00:05:50.440 EAL: Detected lcore 34 as core 12 on socket 0 00:05:50.440 EAL: Detected lcore 35 as core 13 on socket 0 00:05:50.440 EAL: Detected lcore 36 as core 0 on socket 1 00:05:50.440 EAL: Detected lcore 37 as core 1 on socket 1 00:05:50.440 EAL: Detected lcore 38 as core 2 on socket 1 00:05:50.440 EAL: Detected lcore 39 as core 3 on socket 1 00:05:50.440 EAL: Detected lcore 40 as core 4 on socket 1 00:05:50.440 EAL: Detected lcore 41 as core 5 on socket 1 00:05:50.440 EAL: Detected lcore 42 as core 8 on socket 1 00:05:50.440 EAL: Detected lcore 43 as core 9 on socket 1 00:05:50.440 EAL: Detected lcore 44 as core 10 on socket 1 00:05:50.440 EAL: Detected lcore 45 as core 11 on socket 1 00:05:50.440 EAL: Detected lcore 46 as core 12 on 
socket 1 00:05:50.440 EAL: Detected lcore 47 as core 13 on socket 1 00:05:50.440 EAL: Maximum logical cores by configuration: 128 00:05:50.440 EAL: Detected CPU lcores: 48 00:05:50.440 EAL: Detected NUMA nodes: 2 00:05:50.440 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:50.440 EAL: Detected shared linkage of DPDK 00:05:50.440 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:50.440 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:50.440 EAL: Registered [vdev] bus. 00:05:50.440 EAL: bus.vdev log level changed from disabled to notice 00:05:50.440 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:50.440 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:50.440 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:50.440 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:50.440 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:50.440 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:50.440 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:50.440 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:50.440 EAL: No shared files mode enabled, IPC will be disabled 00:05:50.440 EAL: No shared files mode enabled, IPC is disabled 00:05:50.440 EAL: Bus pci wants IOVA as 'DC' 00:05:50.440 EAL: Bus vdev wants IOVA as 'DC' 00:05:50.440 EAL: Buses did not request a specific IOVA mode. 00:05:50.440 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:50.440 EAL: Selected IOVA mode 'VA' 00:05:50.440 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.440 EAL: Probing VFIO support... 00:05:50.440 EAL: IOMMU type 1 (Type 1) is supported 00:05:50.440 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:50.440 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:50.440 EAL: VFIO support initialized 00:05:50.440 EAL: Ask a virtual area of 0x2e000 bytes 00:05:50.440 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:50.440 EAL: Setting up physically contiguous memory... 
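The EAL probe above ("IOMMU type 1 (Type 1) is supported", "VFIO support initialized", IOVA mode 'VA') depends on the vfio-pci rebinds setup.sh performed earlier in this run. The same host-side preconditions can be confirmed by hand with a few sysfs checks (an illustrative sketch, not part of the autotest scripts):

  ls /sys/kernel/iommu_groups | wc -l          # non-zero when the IOMMU is enabled
  lsmod | grep -c '^vfio_pci'                  # vfio-pci module is loaded
  ls /sys/bus/pci/drivers/vfio-pci | grep :    # BDFs currently bound to vfio-pci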
00:05:50.440 EAL: Setting maximum number of open files to 524288 00:05:50.440 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:50.440 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:50.440 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:50.440 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.440 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:50.440 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:50.440 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.440 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:50.440 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:50.440 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.440 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:50.440 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:50.440 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.440 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:50.440 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:50.440 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.440 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:50.440 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:50.440 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.440 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:50.440 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:50.440 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.440 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:50.440 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:50.440 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.440 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:50.440 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:50.440 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:50.440 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.440 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:50.440 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:50.440 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.440 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:50.440 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:50.440 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.440 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:50.440 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:50.440 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.440 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:50.440 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:50.440 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.440 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:50.440 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:50.440 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.440 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:50.440 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:50.440 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.440 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:50.440 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:50.440 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.440 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:50.440 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:50.440 EAL: Hugepages will be freed exactly as allocated. 00:05:50.440 EAL: No shared files mode enabled, IPC is disabled 00:05:50.440 EAL: No shared files mode enabled, IPC is disabled 00:05:50.440 EAL: TSC frequency is ~2700000 KHz 00:05:50.440 EAL: Main lcore 0 is ready (tid=7fe827696a00;cpuset=[0]) 00:05:50.440 EAL: Trying to obtain current memory policy. 00:05:50.440 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.440 EAL: Restoring previous memory policy: 0 00:05:50.440 EAL: request: mp_malloc_sync 00:05:50.440 EAL: No shared files mode enabled, IPC is disabled 00:05:50.440 EAL: Heap on socket 0 was expanded by 2MB 00:05:50.440 EAL: No shared files mode enabled, IPC is disabled 00:05:50.440 EAL: No shared files mode enabled, IPC is disabled 00:05:50.440 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:50.440 EAL: Mem event callback 'spdk:(nil)' registered 00:05:50.440 00:05:50.440 00:05:50.440 CUnit - A unit testing framework for C - Version 2.1-3 00:05:50.440 http://cunit.sourceforge.net/ 00:05:50.440 00:05:50.440 00:05:50.440 Suite: components_suite 00:05:50.440 Test: vtophys_malloc_test ...passed 00:05:50.440 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:50.440 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.440 EAL: Restoring previous memory policy: 4 00:05:50.440 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.440 EAL: request: mp_malloc_sync 00:05:50.440 EAL: No shared files mode enabled, IPC is disabled 00:05:50.440 EAL: Heap on socket 0 was expanded by 4MB 00:05:50.440 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.440 EAL: request: mp_malloc_sync 00:05:50.440 EAL: No shared files mode enabled, IPC is disabled 00:05:50.440 EAL: Heap on socket 0 was shrunk by 4MB 00:05:50.440 EAL: Trying to obtain current memory policy. 00:05:50.440 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.440 EAL: Restoring previous memory policy: 4 00:05:50.440 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.440 EAL: request: mp_malloc_sync 00:05:50.440 EAL: No shared files mode enabled, IPC is disabled 00:05:50.440 EAL: Heap on socket 0 was expanded by 6MB 00:05:50.440 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.440 EAL: request: mp_malloc_sync 00:05:50.440 EAL: No shared files mode enabled, IPC is disabled 00:05:50.440 EAL: Heap on socket 0 was shrunk by 6MB 00:05:50.440 EAL: Trying to obtain current memory policy. 00:05:50.440 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.440 EAL: Restoring previous memory policy: 4 00:05:50.440 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.440 EAL: request: mp_malloc_sync 00:05:50.440 EAL: No shared files mode enabled, IPC is disabled 00:05:50.440 EAL: Heap on socket 0 was expanded by 10MB 00:05:50.440 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.440 EAL: request: mp_malloc_sync 00:05:50.440 EAL: No shared files mode enabled, IPC is disabled 00:05:50.440 EAL: Heap on socket 0 was shrunk by 10MB 00:05:50.440 EAL: Trying to obtain current memory policy. 
00:05:50.440 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.440 EAL: Restoring previous memory policy: 4 00:05:50.440 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.440 EAL: request: mp_malloc_sync 00:05:50.440 EAL: No shared files mode enabled, IPC is disabled 00:05:50.440 EAL: Heap on socket 0 was expanded by 18MB 00:05:50.440 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.440 EAL: request: mp_malloc_sync 00:05:50.440 EAL: No shared files mode enabled, IPC is disabled 00:05:50.440 EAL: Heap on socket 0 was shrunk by 18MB 00:05:50.440 EAL: Trying to obtain current memory policy. 00:05:50.440 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.440 EAL: Restoring previous memory policy: 4 00:05:50.440 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.440 EAL: request: mp_malloc_sync 00:05:50.440 EAL: No shared files mode enabled, IPC is disabled 00:05:50.440 EAL: Heap on socket 0 was expanded by 34MB 00:05:50.440 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.440 EAL: request: mp_malloc_sync 00:05:50.440 EAL: No shared files mode enabled, IPC is disabled 00:05:50.440 EAL: Heap on socket 0 was shrunk by 34MB 00:05:50.440 EAL: Trying to obtain current memory policy. 00:05:50.440 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.440 EAL: Restoring previous memory policy: 4 00:05:50.440 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.440 EAL: request: mp_malloc_sync 00:05:50.440 EAL: No shared files mode enabled, IPC is disabled 00:05:50.440 EAL: Heap on socket 0 was expanded by 66MB 00:05:50.698 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.698 EAL: request: mp_malloc_sync 00:05:50.698 EAL: No shared files mode enabled, IPC is disabled 00:05:50.698 EAL: Heap on socket 0 was shrunk by 66MB 00:05:50.698 EAL: Trying to obtain current memory policy. 00:05:50.698 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.698 EAL: Restoring previous memory policy: 4 00:05:50.698 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.698 EAL: request: mp_malloc_sync 00:05:50.698 EAL: No shared files mode enabled, IPC is disabled 00:05:50.698 EAL: Heap on socket 0 was expanded by 130MB 00:05:50.698 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.698 EAL: request: mp_malloc_sync 00:05:50.698 EAL: No shared files mode enabled, IPC is disabled 00:05:50.698 EAL: Heap on socket 0 was shrunk by 130MB 00:05:50.698 EAL: Trying to obtain current memory policy. 00:05:50.698 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.698 EAL: Restoring previous memory policy: 4 00:05:50.698 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.698 EAL: request: mp_malloc_sync 00:05:50.698 EAL: No shared files mode enabled, IPC is disabled 00:05:50.698 EAL: Heap on socket 0 was expanded by 258MB 00:05:50.698 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.956 EAL: request: mp_malloc_sync 00:05:50.956 EAL: No shared files mode enabled, IPC is disabled 00:05:50.956 EAL: Heap on socket 0 was shrunk by 258MB 00:05:50.956 EAL: Trying to obtain current memory policy. 
00:05:50.956 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.956 EAL: Restoring previous memory policy: 4 00:05:50.956 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.956 EAL: request: mp_malloc_sync 00:05:50.956 EAL: No shared files mode enabled, IPC is disabled 00:05:50.956 EAL: Heap on socket 0 was expanded by 514MB 00:05:51.215 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.215 EAL: request: mp_malloc_sync 00:05:51.215 EAL: No shared files mode enabled, IPC is disabled 00:05:51.215 EAL: Heap on socket 0 was shrunk by 514MB 00:05:51.215 EAL: Trying to obtain current memory policy. 00:05:51.215 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.472 EAL: Restoring previous memory policy: 4 00:05:51.472 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.472 EAL: request: mp_malloc_sync 00:05:51.472 EAL: No shared files mode enabled, IPC is disabled 00:05:51.472 EAL: Heap on socket 0 was expanded by 1026MB 00:05:51.729 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.987 EAL: request: mp_malloc_sync 00:05:51.987 EAL: No shared files mode enabled, IPC is disabled 00:05:51.987 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:51.987 passed 00:05:51.987 00:05:51.987 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.987 suites 1 1 n/a 0 0 00:05:51.987 tests 2 2 2 0 0 00:05:51.987 asserts 497 497 497 0 n/a 00:05:51.987 00:05:51.987 Elapsed time = 1.365 seconds 00:05:51.987 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.987 EAL: request: mp_malloc_sync 00:05:51.987 EAL: No shared files mode enabled, IPC is disabled 00:05:51.987 EAL: Heap on socket 0 was shrunk by 2MB 00:05:51.987 EAL: No shared files mode enabled, IPC is disabled 00:05:51.987 EAL: No shared files mode enabled, IPC is disabled 00:05:51.987 EAL: No shared files mode enabled, IPC is disabled 00:05:51.987 00:05:51.987 real 0m1.477s 00:05:51.987 user 0m0.842s 00:05:51.987 sys 0m0.607s 00:05:51.987 06:31:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:51.987 06:31:56 -- common/autotest_common.sh@10 -- # set +x 00:05:51.987 ************************************ 00:05:51.987 END TEST env_vtophys 00:05:51.987 ************************************ 00:05:51.987 06:31:56 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:51.987 06:31:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:51.987 06:31:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.987 06:31:56 -- common/autotest_common.sh@10 -- # set +x 00:05:51.987 ************************************ 00:05:51.987 START TEST env_pci 00:05:51.987 ************************************ 00:05:51.987 06:31:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:51.987 00:05:51.987 00:05:51.987 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.987 http://cunit.sourceforge.net/ 00:05:51.987 00:05:51.987 00:05:51.987 Suite: pci 00:05:51.987 Test: pci_hook ...[2024-04-17 06:31:56.536758] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 4057949 has claimed it 00:05:51.987 EAL: Cannot find device (10000:00:01.0) 00:05:51.987 EAL: Failed to attach device on primary process 00:05:51.987 passed 00:05:51.987 00:05:51.987 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.987 suites 1 1 n/a 0 0 00:05:51.987 tests 1 1 1 0 0 
00:05:51.987 asserts 25 25 25 0 n/a 00:05:51.987 00:05:51.987 Elapsed time = 0.021 seconds 00:05:51.987 00:05:51.987 real 0m0.033s 00:05:51.987 user 0m0.005s 00:05:51.987 sys 0m0.028s 00:05:51.987 06:31:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:51.987 06:31:56 -- common/autotest_common.sh@10 -- # set +x 00:05:51.987 ************************************ 00:05:51.987 END TEST env_pci 00:05:51.987 ************************************ 00:05:51.987 06:31:56 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:51.987 06:31:56 -- env/env.sh@15 -- # uname 00:05:51.987 06:31:56 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:51.987 06:31:56 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:51.987 06:31:56 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:51.987 06:31:56 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:51.987 06:31:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.987 06:31:56 -- common/autotest_common.sh@10 -- # set +x 00:05:52.246 ************************************ 00:05:52.246 START TEST env_dpdk_post_init 00:05:52.246 ************************************ 00:05:52.246 06:31:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:52.246 EAL: Detected CPU lcores: 48 00:05:52.246 EAL: Detected NUMA nodes: 2 00:05:52.246 EAL: Detected shared linkage of DPDK 00:05:52.246 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:52.246 EAL: Selected IOVA mode 'VA' 00:05:52.246 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.246 EAL: VFIO support initialized 00:05:52.246 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:52.246 EAL: Using IOMMU type 1 (Type 1) 00:05:52.246 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:52.246 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:52.246 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:52.246 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:52.246 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:52.503 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:52.503 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:52.503 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:52.503 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:52.503 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:52.503 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:52.503 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:52.503 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:52.503 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:52.503 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:52.503 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:53.438 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:56.717 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:56.717 EAL: 
Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:56.717 Starting DPDK initialization... 00:05:56.717 Starting SPDK post initialization... 00:05:56.717 SPDK NVMe probe 00:05:56.717 Attaching to 0000:88:00.0 00:05:56.717 Attached to 0000:88:00.0 00:05:56.717 Cleaning up... 00:05:56.717 00:05:56.717 real 0m4.404s 00:05:56.717 user 0m3.248s 00:05:56.717 sys 0m0.210s 00:05:56.717 06:32:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:56.717 06:32:01 -- common/autotest_common.sh@10 -- # set +x 00:05:56.717 ************************************ 00:05:56.717 END TEST env_dpdk_post_init 00:05:56.717 ************************************ 00:05:56.717 06:32:01 -- env/env.sh@26 -- # uname 00:05:56.717 06:32:01 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:56.717 06:32:01 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:56.717 06:32:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:56.717 06:32:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.717 06:32:01 -- common/autotest_common.sh@10 -- # set +x 00:05:56.717 ************************************ 00:05:56.717 START TEST env_mem_callbacks 00:05:56.717 ************************************ 00:05:56.717 06:32:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:56.717 EAL: Detected CPU lcores: 48 00:05:56.717 EAL: Detected NUMA nodes: 2 00:05:56.717 EAL: Detected shared linkage of DPDK 00:05:56.717 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:56.717 EAL: Selected IOVA mode 'VA' 00:05:56.717 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.717 EAL: VFIO support initialized 00:05:56.717 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:56.717 00:05:56.717 00:05:56.717 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.717 http://cunit.sourceforge.net/ 00:05:56.717 00:05:56.717 00:05:56.717 Suite: memory 00:05:56.717 Test: test ... 
00:05:56.717 register 0x200000200000 2097152 00:05:56.717 malloc 3145728 00:05:56.717 register 0x200000400000 4194304 00:05:56.717 buf 0x200000500000 len 3145728 PASSED 00:05:56.717 malloc 64 00:05:56.717 buf 0x2000004fff40 len 64 PASSED 00:05:56.717 malloc 4194304 00:05:56.717 register 0x200000800000 6291456 00:05:56.717 buf 0x200000a00000 len 4194304 PASSED 00:05:56.717 free 0x200000500000 3145728 00:05:56.717 free 0x2000004fff40 64 00:05:56.717 unregister 0x200000400000 4194304 PASSED 00:05:56.717 free 0x200000a00000 4194304 00:05:56.717 unregister 0x200000800000 6291456 PASSED 00:05:56.717 malloc 8388608 00:05:56.717 register 0x200000400000 10485760 00:05:56.717 buf 0x200000600000 len 8388608 PASSED 00:05:56.717 free 0x200000600000 8388608 00:05:56.717 unregister 0x200000400000 10485760 PASSED 00:05:56.717 passed 00:05:56.717 00:05:56.717 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.717 suites 1 1 n/a 0 0 00:05:56.717 tests 1 1 1 0 0 00:05:56.717 asserts 15 15 15 0 n/a 00:05:56.717 00:05:56.717 Elapsed time = 0.005 seconds 00:05:56.717 00:05:56.717 real 0m0.048s 00:05:56.717 user 0m0.015s 00:05:56.717 sys 0m0.033s 00:05:56.717 06:32:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:56.717 06:32:01 -- common/autotest_common.sh@10 -- # set +x 00:05:56.717 ************************************ 00:05:56.717 END TEST env_mem_callbacks 00:05:56.717 ************************************ 00:05:56.717 00:05:56.717 real 0m6.757s 00:05:56.717 user 0m4.499s 00:05:56.717 sys 0m1.248s 00:05:56.717 06:32:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:56.717 06:32:01 -- common/autotest_common.sh@10 -- # set +x 00:05:56.717 ************************************ 00:05:56.717 END TEST env 00:05:56.717 ************************************ 00:05:56.717 06:32:01 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:56.717 06:32:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:56.717 06:32:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.717 06:32:01 -- common/autotest_common.sh@10 -- # set +x 00:05:56.975 ************************************ 00:05:56.975 START TEST rpc 00:05:56.975 ************************************ 00:05:56.975 06:32:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:56.975 * Looking for test storage... 00:05:56.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:56.975 06:32:01 -- rpc/rpc.sh@65 -- # spdk_pid=4058738 00:05:56.975 06:32:01 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:56.975 06:32:01 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:56.975 06:32:01 -- rpc/rpc.sh@67 -- # waitforlisten 4058738 00:05:56.975 06:32:01 -- common/autotest_common.sh@817 -- # '[' -z 4058738 ']' 00:05:56.975 06:32:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.975 06:32:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:56.975 06:32:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
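The waitforlisten step above starts spdk_tgt with the bdev tracepoint group enabled (-e bdev) and blocks until the JSON-RPC server answers on /var/tmp/spdk.sock. A reduced sketch of that launch-and-poll pattern (the probe method and retry count are illustrative, not lifted from autotest_common.sh):

  ./build/bin/spdk_tgt -e bdev &
  tgt_pid=$!
  for _ in $(seq 1 100); do
    # poll the RPC socket until the target responds
    ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
  done
  echo "spdk_tgt (pid $tgt_pid) is listening"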
00:05:56.975 06:32:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:56.975 06:32:01 -- common/autotest_common.sh@10 -- # set +x 00:05:56.975 [2024-04-17 06:32:01.493831] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:05:56.975 [2024-04-17 06:32:01.493911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4058738 ] 00:05:56.975 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.975 [2024-04-17 06:32:01.550749] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.270 [2024-04-17 06:32:01.635939] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:57.270 [2024-04-17 06:32:01.635993] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 4058738' to capture a snapshot of events at runtime. 00:05:57.270 [2024-04-17 06:32:01.636016] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:57.270 [2024-04-17 06:32:01.636026] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:57.270 [2024-04-17 06:32:01.636036] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid4058738 for offline analysis/debug. 00:05:57.270 [2024-04-17 06:32:01.636064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.532 06:32:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:57.532 06:32:01 -- common/autotest_common.sh@850 -- # return 0 00:05:57.532 06:32:01 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:57.532 06:32:01 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:57.532 06:32:01 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:57.532 06:32:01 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:57.532 06:32:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:57.532 06:32:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.532 06:32:01 -- common/autotest_common.sh@10 -- # set +x 00:05:57.532 ************************************ 00:05:57.532 START TEST rpc_integrity 00:05:57.532 ************************************ 00:05:57.532 06:32:01 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:05:57.532 06:32:01 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:57.532 06:32:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:57.532 06:32:01 -- common/autotest_common.sh@10 -- # set +x 00:05:57.532 06:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:57.532 06:32:02 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:57.532 06:32:02 -- rpc/rpc.sh@13 -- # jq length 00:05:57.532 06:32:02 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:57.532 06:32:02 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:57.532 06:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:05:57.532 06:32:02 -- common/autotest_common.sh@10 -- # set +x 00:05:57.532 06:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:57.532 06:32:02 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:57.532 06:32:02 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:57.532 06:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:57.532 06:32:02 -- common/autotest_common.sh@10 -- # set +x 00:05:57.532 06:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:57.532 06:32:02 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:57.532 { 00:05:57.532 "name": "Malloc0", 00:05:57.532 "aliases": [ 00:05:57.532 "669fca96-3e50-4842-b43e-0f2bd677c7f9" 00:05:57.532 ], 00:05:57.532 "product_name": "Malloc disk", 00:05:57.532 "block_size": 512, 00:05:57.532 "num_blocks": 16384, 00:05:57.532 "uuid": "669fca96-3e50-4842-b43e-0f2bd677c7f9", 00:05:57.532 "assigned_rate_limits": { 00:05:57.532 "rw_ios_per_sec": 0, 00:05:57.532 "rw_mbytes_per_sec": 0, 00:05:57.532 "r_mbytes_per_sec": 0, 00:05:57.532 "w_mbytes_per_sec": 0 00:05:57.532 }, 00:05:57.532 "claimed": false, 00:05:57.532 "zoned": false, 00:05:57.532 "supported_io_types": { 00:05:57.532 "read": true, 00:05:57.532 "write": true, 00:05:57.532 "unmap": true, 00:05:57.532 "write_zeroes": true, 00:05:57.532 "flush": true, 00:05:57.532 "reset": true, 00:05:57.532 "compare": false, 00:05:57.532 "compare_and_write": false, 00:05:57.532 "abort": true, 00:05:57.532 "nvme_admin": false, 00:05:57.532 "nvme_io": false 00:05:57.532 }, 00:05:57.532 "memory_domains": [ 00:05:57.532 { 00:05:57.532 "dma_device_id": "system", 00:05:57.532 "dma_device_type": 1 00:05:57.532 }, 00:05:57.532 { 00:05:57.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.532 "dma_device_type": 2 00:05:57.532 } 00:05:57.532 ], 00:05:57.532 "driver_specific": {} 00:05:57.532 } 00:05:57.532 ]' 00:05:57.532 06:32:02 -- rpc/rpc.sh@17 -- # jq length 00:05:57.532 06:32:02 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:57.532 06:32:02 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:57.533 06:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:57.533 06:32:02 -- common/autotest_common.sh@10 -- # set +x 00:05:57.533 [2024-04-17 06:32:02.104039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:57.533 [2024-04-17 06:32:02.104087] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:57.533 [2024-04-17 06:32:02.104113] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x151ec70 00:05:57.533 [2024-04-17 06:32:02.104128] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:57.533 [2024-04-17 06:32:02.105538] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:57.533 [2024-04-17 06:32:02.105568] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:57.533 Passthru0 00:05:57.533 06:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:57.533 06:32:02 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:57.533 06:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:57.533 06:32:02 -- common/autotest_common.sh@10 -- # set +x 00:05:57.533 06:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:57.533 06:32:02 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:57.533 { 00:05:57.533 "name": "Malloc0", 00:05:57.533 "aliases": [ 00:05:57.533 "669fca96-3e50-4842-b43e-0f2bd677c7f9" 00:05:57.533 ], 00:05:57.533 "product_name": "Malloc disk", 00:05:57.533 "block_size": 512, 
00:05:57.533 "num_blocks": 16384, 00:05:57.533 "uuid": "669fca96-3e50-4842-b43e-0f2bd677c7f9", 00:05:57.533 "assigned_rate_limits": { 00:05:57.533 "rw_ios_per_sec": 0, 00:05:57.533 "rw_mbytes_per_sec": 0, 00:05:57.533 "r_mbytes_per_sec": 0, 00:05:57.533 "w_mbytes_per_sec": 0 00:05:57.533 }, 00:05:57.533 "claimed": true, 00:05:57.533 "claim_type": "exclusive_write", 00:05:57.533 "zoned": false, 00:05:57.533 "supported_io_types": { 00:05:57.533 "read": true, 00:05:57.533 "write": true, 00:05:57.533 "unmap": true, 00:05:57.533 "write_zeroes": true, 00:05:57.533 "flush": true, 00:05:57.533 "reset": true, 00:05:57.533 "compare": false, 00:05:57.533 "compare_and_write": false, 00:05:57.533 "abort": true, 00:05:57.533 "nvme_admin": false, 00:05:57.533 "nvme_io": false 00:05:57.533 }, 00:05:57.533 "memory_domains": [ 00:05:57.533 { 00:05:57.533 "dma_device_id": "system", 00:05:57.533 "dma_device_type": 1 00:05:57.533 }, 00:05:57.533 { 00:05:57.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.533 "dma_device_type": 2 00:05:57.533 } 00:05:57.533 ], 00:05:57.533 "driver_specific": {} 00:05:57.533 }, 00:05:57.533 { 00:05:57.533 "name": "Passthru0", 00:05:57.533 "aliases": [ 00:05:57.533 "48202843-703b-5fff-8cb2-787557c18a30" 00:05:57.533 ], 00:05:57.533 "product_name": "passthru", 00:05:57.533 "block_size": 512, 00:05:57.533 "num_blocks": 16384, 00:05:57.533 "uuid": "48202843-703b-5fff-8cb2-787557c18a30", 00:05:57.533 "assigned_rate_limits": { 00:05:57.533 "rw_ios_per_sec": 0, 00:05:57.533 "rw_mbytes_per_sec": 0, 00:05:57.533 "r_mbytes_per_sec": 0, 00:05:57.533 "w_mbytes_per_sec": 0 00:05:57.533 }, 00:05:57.533 "claimed": false, 00:05:57.533 "zoned": false, 00:05:57.533 "supported_io_types": { 00:05:57.533 "read": true, 00:05:57.533 "write": true, 00:05:57.533 "unmap": true, 00:05:57.533 "write_zeroes": true, 00:05:57.533 "flush": true, 00:05:57.533 "reset": true, 00:05:57.533 "compare": false, 00:05:57.533 "compare_and_write": false, 00:05:57.533 "abort": true, 00:05:57.533 "nvme_admin": false, 00:05:57.533 "nvme_io": false 00:05:57.533 }, 00:05:57.533 "memory_domains": [ 00:05:57.533 { 00:05:57.533 "dma_device_id": "system", 00:05:57.533 "dma_device_type": 1 00:05:57.533 }, 00:05:57.533 { 00:05:57.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.533 "dma_device_type": 2 00:05:57.533 } 00:05:57.533 ], 00:05:57.533 "driver_specific": { 00:05:57.533 "passthru": { 00:05:57.533 "name": "Passthru0", 00:05:57.533 "base_bdev_name": "Malloc0" 00:05:57.533 } 00:05:57.533 } 00:05:57.533 } 00:05:57.533 ]' 00:05:57.533 06:32:02 -- rpc/rpc.sh@21 -- # jq length 00:05:57.790 06:32:02 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:57.790 06:32:02 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:57.790 06:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:57.790 06:32:02 -- common/autotest_common.sh@10 -- # set +x 00:05:57.790 06:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:57.790 06:32:02 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:57.790 06:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:57.790 06:32:02 -- common/autotest_common.sh@10 -- # set +x 00:05:57.790 06:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:57.790 06:32:02 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:57.790 06:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:57.790 06:32:02 -- common/autotest_common.sh@10 -- # set +x 00:05:57.790 06:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:57.790 06:32:02 -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:05:57.790 06:32:02 -- rpc/rpc.sh@26 -- # jq length 00:05:57.790 06:32:02 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:57.790 00:05:57.790 real 0m0.228s 00:05:57.790 user 0m0.140s 00:05:57.790 sys 0m0.025s 00:05:57.790 06:32:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:57.790 06:32:02 -- common/autotest_common.sh@10 -- # set +x 00:05:57.790 ************************************ 00:05:57.790 END TEST rpc_integrity 00:05:57.790 ************************************ 00:05:57.790 06:32:02 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:57.790 06:32:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:57.790 06:32:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.790 06:32:02 -- common/autotest_common.sh@10 -- # set +x 00:05:57.790 ************************************ 00:05:57.790 START TEST rpc_plugins 00:05:57.790 ************************************ 00:05:57.790 06:32:02 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:05:57.790 06:32:02 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:57.790 06:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:57.790 06:32:02 -- common/autotest_common.sh@10 -- # set +x 00:05:57.790 06:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:57.790 06:32:02 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:57.790 06:32:02 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:57.790 06:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:57.790 06:32:02 -- common/autotest_common.sh@10 -- # set +x 00:05:57.790 06:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:57.790 06:32:02 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:57.790 { 00:05:57.790 "name": "Malloc1", 00:05:57.790 "aliases": [ 00:05:57.790 "e11cce63-48f5-45c2-9903-751471723313" 00:05:57.790 ], 00:05:57.790 "product_name": "Malloc disk", 00:05:57.790 "block_size": 4096, 00:05:57.790 "num_blocks": 256, 00:05:57.790 "uuid": "e11cce63-48f5-45c2-9903-751471723313", 00:05:57.790 "assigned_rate_limits": { 00:05:57.790 "rw_ios_per_sec": 0, 00:05:57.790 "rw_mbytes_per_sec": 0, 00:05:57.790 "r_mbytes_per_sec": 0, 00:05:57.790 "w_mbytes_per_sec": 0 00:05:57.790 }, 00:05:57.790 "claimed": false, 00:05:57.790 "zoned": false, 00:05:57.790 "supported_io_types": { 00:05:57.790 "read": true, 00:05:57.790 "write": true, 00:05:57.790 "unmap": true, 00:05:57.790 "write_zeroes": true, 00:05:57.790 "flush": true, 00:05:57.790 "reset": true, 00:05:57.790 "compare": false, 00:05:57.790 "compare_and_write": false, 00:05:57.790 "abort": true, 00:05:57.790 "nvme_admin": false, 00:05:57.790 "nvme_io": false 00:05:57.790 }, 00:05:57.790 "memory_domains": [ 00:05:57.790 { 00:05:57.790 "dma_device_id": "system", 00:05:57.790 "dma_device_type": 1 00:05:57.790 }, 00:05:57.790 { 00:05:57.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.790 "dma_device_type": 2 00:05:57.790 } 00:05:57.790 ], 00:05:57.790 "driver_specific": {} 00:05:57.790 } 00:05:57.790 ]' 00:05:57.790 06:32:02 -- rpc/rpc.sh@32 -- # jq length 00:05:58.047 06:32:02 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:58.047 06:32:02 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:58.047 06:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:58.047 06:32:02 -- common/autotest_common.sh@10 -- # set +x 00:05:58.047 06:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:58.047 06:32:02 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:58.047 06:32:02 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:05:58.047 06:32:02 -- common/autotest_common.sh@10 -- # set +x 00:05:58.047 06:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:58.047 06:32:02 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:58.047 06:32:02 -- rpc/rpc.sh@36 -- # jq length 00:05:58.047 06:32:02 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:58.047 00:05:58.047 real 0m0.114s 00:05:58.047 user 0m0.069s 00:05:58.047 sys 0m0.014s 00:05:58.047 06:32:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:58.047 06:32:02 -- common/autotest_common.sh@10 -- # set +x 00:05:58.047 ************************************ 00:05:58.047 END TEST rpc_plugins 00:05:58.047 ************************************ 00:05:58.047 06:32:02 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:58.047 06:32:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.047 06:32:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.047 06:32:02 -- common/autotest_common.sh@10 -- # set +x 00:05:58.047 ************************************ 00:05:58.047 START TEST rpc_trace_cmd_test 00:05:58.047 ************************************ 00:05:58.047 06:32:02 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:05:58.047 06:32:02 -- rpc/rpc.sh@40 -- # local info 00:05:58.047 06:32:02 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:58.047 06:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:58.047 06:32:02 -- common/autotest_common.sh@10 -- # set +x 00:05:58.047 06:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:58.047 06:32:02 -- rpc/rpc.sh@42 -- # info='{ 00:05:58.048 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid4058738", 00:05:58.048 "tpoint_group_mask": "0x8", 00:05:58.048 "iscsi_conn": { 00:05:58.048 "mask": "0x2", 00:05:58.048 "tpoint_mask": "0x0" 00:05:58.048 }, 00:05:58.048 "scsi": { 00:05:58.048 "mask": "0x4", 00:05:58.048 "tpoint_mask": "0x0" 00:05:58.048 }, 00:05:58.048 "bdev": { 00:05:58.048 "mask": "0x8", 00:05:58.048 "tpoint_mask": "0xffffffffffffffff" 00:05:58.048 }, 00:05:58.048 "nvmf_rdma": { 00:05:58.048 "mask": "0x10", 00:05:58.048 "tpoint_mask": "0x0" 00:05:58.048 }, 00:05:58.048 "nvmf_tcp": { 00:05:58.048 "mask": "0x20", 00:05:58.048 "tpoint_mask": "0x0" 00:05:58.048 }, 00:05:58.048 "ftl": { 00:05:58.048 "mask": "0x40", 00:05:58.048 "tpoint_mask": "0x0" 00:05:58.048 }, 00:05:58.048 "blobfs": { 00:05:58.048 "mask": "0x80", 00:05:58.048 "tpoint_mask": "0x0" 00:05:58.048 }, 00:05:58.048 "dsa": { 00:05:58.048 "mask": "0x200", 00:05:58.048 "tpoint_mask": "0x0" 00:05:58.048 }, 00:05:58.048 "thread": { 00:05:58.048 "mask": "0x400", 00:05:58.048 "tpoint_mask": "0x0" 00:05:58.048 }, 00:05:58.048 "nvme_pcie": { 00:05:58.048 "mask": "0x800", 00:05:58.048 "tpoint_mask": "0x0" 00:05:58.048 }, 00:05:58.048 "iaa": { 00:05:58.048 "mask": "0x1000", 00:05:58.048 "tpoint_mask": "0x0" 00:05:58.048 }, 00:05:58.048 "nvme_tcp": { 00:05:58.048 "mask": "0x2000", 00:05:58.048 "tpoint_mask": "0x0" 00:05:58.048 }, 00:05:58.048 "bdev_nvme": { 00:05:58.048 "mask": "0x4000", 00:05:58.048 "tpoint_mask": "0x0" 00:05:58.048 }, 00:05:58.048 "sock": { 00:05:58.048 "mask": "0x8000", 00:05:58.048 "tpoint_mask": "0x0" 00:05:58.048 } 00:05:58.048 }' 00:05:58.048 06:32:02 -- rpc/rpc.sh@43 -- # jq length 00:05:58.048 06:32:02 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:58.048 06:32:02 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:58.305 06:32:02 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:58.305 06:32:02 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 
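The rpc_integrity and rpc_daemon_integrity runs in this stretch of the trace drive the target through the same bdev lifecycle; rpc_cmd here is essentially scripts/rpc.py talking to the default /var/tmp/spdk.sock, so the sequence reduces to the following sketch (paths shortened, expected counts taken from the jq checks in the trace):

  scripts/rpc.py bdev_malloc_create 8 512                      # 8 MB malloc bdev, 512-byte blocks -> Malloc0
  scripts/rpc.py bdev_get_bdevs | jq length                    # 1
  scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0  # Passthru0 claims Malloc0 (claim_type exclusive_write)
  scripts/rpc.py bdev_get_bdevs | jq length                    # 2
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete Malloc0
  scripts/rpc.py bdev_get_bdevs | jq length                    # 0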
00:05:58.305 06:32:02 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:58.305 06:32:02 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:58.305 06:32:02 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:58.305 06:32:02 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:58.305 06:32:02 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:58.305 00:05:58.305 real 0m0.197s 00:05:58.305 user 0m0.174s 00:05:58.305 sys 0m0.016s 00:05:58.305 06:32:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:58.305 06:32:02 -- common/autotest_common.sh@10 -- # set +x 00:05:58.305 ************************************ 00:05:58.305 END TEST rpc_trace_cmd_test 00:05:58.305 ************************************ 00:05:58.305 06:32:02 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:58.305 06:32:02 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:58.305 06:32:02 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:58.305 06:32:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.305 06:32:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.305 06:32:02 -- common/autotest_common.sh@10 -- # set +x 00:05:58.305 ************************************ 00:05:58.305 START TEST rpc_daemon_integrity 00:05:58.305 ************************************ 00:05:58.305 06:32:02 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:05:58.305 06:32:02 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:58.305 06:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:58.305 06:32:02 -- common/autotest_common.sh@10 -- # set +x 00:05:58.305 06:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:58.305 06:32:02 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:58.305 06:32:02 -- rpc/rpc.sh@13 -- # jq length 00:05:58.563 06:32:02 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:58.563 06:32:02 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:58.563 06:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:58.563 06:32:02 -- common/autotest_common.sh@10 -- # set +x 00:05:58.563 06:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:58.563 06:32:02 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:58.563 06:32:02 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:58.563 06:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:58.563 06:32:02 -- common/autotest_common.sh@10 -- # set +x 00:05:58.563 06:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:58.563 06:32:02 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:58.563 { 00:05:58.563 "name": "Malloc2", 00:05:58.563 "aliases": [ 00:05:58.563 "629d1aec-18e9-47da-9b2d-4dfd5b241092" 00:05:58.563 ], 00:05:58.563 "product_name": "Malloc disk", 00:05:58.563 "block_size": 512, 00:05:58.563 "num_blocks": 16384, 00:05:58.563 "uuid": "629d1aec-18e9-47da-9b2d-4dfd5b241092", 00:05:58.563 "assigned_rate_limits": { 00:05:58.563 "rw_ios_per_sec": 0, 00:05:58.563 "rw_mbytes_per_sec": 0, 00:05:58.563 "r_mbytes_per_sec": 0, 00:05:58.563 "w_mbytes_per_sec": 0 00:05:58.563 }, 00:05:58.563 "claimed": false, 00:05:58.563 "zoned": false, 00:05:58.563 "supported_io_types": { 00:05:58.563 "read": true, 00:05:58.563 "write": true, 00:05:58.563 "unmap": true, 00:05:58.563 "write_zeroes": true, 00:05:58.563 "flush": true, 00:05:58.563 "reset": true, 00:05:58.563 "compare": false, 00:05:58.563 "compare_and_write": false, 00:05:58.563 "abort": true, 00:05:58.563 "nvme_admin": false, 00:05:58.563 "nvme_io": false 00:05:58.563 }, 00:05:58.563 "memory_domains": [ 00:05:58.563 { 00:05:58.563 "dma_device_id": "system", 00:05:58.563 
"dma_device_type": 1 00:05:58.563 }, 00:05:58.563 { 00:05:58.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.563 "dma_device_type": 2 00:05:58.563 } 00:05:58.563 ], 00:05:58.563 "driver_specific": {} 00:05:58.563 } 00:05:58.563 ]' 00:05:58.563 06:32:02 -- rpc/rpc.sh@17 -- # jq length 00:05:58.563 06:32:02 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:58.563 06:32:02 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:58.563 06:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:58.563 06:32:02 -- common/autotest_common.sh@10 -- # set +x 00:05:58.563 [2024-04-17 06:32:02.990781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:58.563 [2024-04-17 06:32:02.990827] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:58.563 [2024-04-17 06:32:02.990856] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x151eea0 00:05:58.563 [2024-04-17 06:32:02.990872] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:58.563 [2024-04-17 06:32:02.992240] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:58.563 [2024-04-17 06:32:02.992268] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:58.563 Passthru0 00:05:58.563 06:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:58.563 06:32:02 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:58.563 06:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:58.563 06:32:02 -- common/autotest_common.sh@10 -- # set +x 00:05:58.563 06:32:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:58.563 06:32:03 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:58.563 { 00:05:58.563 "name": "Malloc2", 00:05:58.563 "aliases": [ 00:05:58.563 "629d1aec-18e9-47da-9b2d-4dfd5b241092" 00:05:58.563 ], 00:05:58.563 "product_name": "Malloc disk", 00:05:58.563 "block_size": 512, 00:05:58.563 "num_blocks": 16384, 00:05:58.563 "uuid": "629d1aec-18e9-47da-9b2d-4dfd5b241092", 00:05:58.563 "assigned_rate_limits": { 00:05:58.563 "rw_ios_per_sec": 0, 00:05:58.563 "rw_mbytes_per_sec": 0, 00:05:58.563 "r_mbytes_per_sec": 0, 00:05:58.563 "w_mbytes_per_sec": 0 00:05:58.563 }, 00:05:58.563 "claimed": true, 00:05:58.563 "claim_type": "exclusive_write", 00:05:58.563 "zoned": false, 00:05:58.563 "supported_io_types": { 00:05:58.563 "read": true, 00:05:58.563 "write": true, 00:05:58.563 "unmap": true, 00:05:58.563 "write_zeroes": true, 00:05:58.563 "flush": true, 00:05:58.563 "reset": true, 00:05:58.563 "compare": false, 00:05:58.563 "compare_and_write": false, 00:05:58.563 "abort": true, 00:05:58.563 "nvme_admin": false, 00:05:58.563 "nvme_io": false 00:05:58.563 }, 00:05:58.563 "memory_domains": [ 00:05:58.563 { 00:05:58.563 "dma_device_id": "system", 00:05:58.563 "dma_device_type": 1 00:05:58.563 }, 00:05:58.563 { 00:05:58.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.563 "dma_device_type": 2 00:05:58.564 } 00:05:58.564 ], 00:05:58.564 "driver_specific": {} 00:05:58.564 }, 00:05:58.564 { 00:05:58.564 "name": "Passthru0", 00:05:58.564 "aliases": [ 00:05:58.564 "45a81c35-ebe7-52ff-a58c-d464c4cdde93" 00:05:58.564 ], 00:05:58.564 "product_name": "passthru", 00:05:58.564 "block_size": 512, 00:05:58.564 "num_blocks": 16384, 00:05:58.564 "uuid": "45a81c35-ebe7-52ff-a58c-d464c4cdde93", 00:05:58.564 "assigned_rate_limits": { 00:05:58.564 "rw_ios_per_sec": 0, 00:05:58.564 "rw_mbytes_per_sec": 0, 00:05:58.564 "r_mbytes_per_sec": 0, 00:05:58.564 
"w_mbytes_per_sec": 0 00:05:58.564 }, 00:05:58.564 "claimed": false, 00:05:58.564 "zoned": false, 00:05:58.564 "supported_io_types": { 00:05:58.564 "read": true, 00:05:58.564 "write": true, 00:05:58.564 "unmap": true, 00:05:58.564 "write_zeroes": true, 00:05:58.564 "flush": true, 00:05:58.564 "reset": true, 00:05:58.564 "compare": false, 00:05:58.564 "compare_and_write": false, 00:05:58.564 "abort": true, 00:05:58.564 "nvme_admin": false, 00:05:58.564 "nvme_io": false 00:05:58.564 }, 00:05:58.564 "memory_domains": [ 00:05:58.564 { 00:05:58.564 "dma_device_id": "system", 00:05:58.564 "dma_device_type": 1 00:05:58.564 }, 00:05:58.564 { 00:05:58.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.564 "dma_device_type": 2 00:05:58.564 } 00:05:58.564 ], 00:05:58.564 "driver_specific": { 00:05:58.564 "passthru": { 00:05:58.564 "name": "Passthru0", 00:05:58.564 "base_bdev_name": "Malloc2" 00:05:58.564 } 00:05:58.564 } 00:05:58.564 } 00:05:58.564 ]' 00:05:58.564 06:32:03 -- rpc/rpc.sh@21 -- # jq length 00:05:58.564 06:32:03 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:58.564 06:32:03 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:58.564 06:32:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:58.564 06:32:03 -- common/autotest_common.sh@10 -- # set +x 00:05:58.564 06:32:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:58.564 06:32:03 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:58.564 06:32:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:58.564 06:32:03 -- common/autotest_common.sh@10 -- # set +x 00:05:58.564 06:32:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:58.564 06:32:03 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:58.564 06:32:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:58.564 06:32:03 -- common/autotest_common.sh@10 -- # set +x 00:05:58.564 06:32:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:58.564 06:32:03 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:58.564 06:32:03 -- rpc/rpc.sh@26 -- # jq length 00:05:58.564 06:32:03 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:58.564 00:05:58.564 real 0m0.224s 00:05:58.564 user 0m0.146s 00:05:58.564 sys 0m0.020s 00:05:58.564 06:32:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:58.564 06:32:03 -- common/autotest_common.sh@10 -- # set +x 00:05:58.564 ************************************ 00:05:58.564 END TEST rpc_daemon_integrity 00:05:58.564 ************************************ 00:05:58.564 06:32:03 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:58.564 06:32:03 -- rpc/rpc.sh@84 -- # killprocess 4058738 00:05:58.564 06:32:03 -- common/autotest_common.sh@936 -- # '[' -z 4058738 ']' 00:05:58.564 06:32:03 -- common/autotest_common.sh@940 -- # kill -0 4058738 00:05:58.564 06:32:03 -- common/autotest_common.sh@941 -- # uname 00:05:58.564 06:32:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:58.564 06:32:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4058738 00:05:58.564 06:32:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:58.564 06:32:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:58.564 06:32:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4058738' 00:05:58.564 killing process with pid 4058738 00:05:58.564 06:32:03 -- common/autotest_common.sh@955 -- # kill 4058738 00:05:58.564 06:32:03 -- common/autotest_common.sh@960 -- # wait 4058738 00:05:59.130 00:05:59.130 real 0m2.162s 00:05:59.130 user 0m2.735s 
00:05:59.130 sys 0m0.714s 00:05:59.130 06:32:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:59.130 06:32:03 -- common/autotest_common.sh@10 -- # set +x 00:05:59.130 ************************************ 00:05:59.130 END TEST rpc 00:05:59.130 ************************************ 00:05:59.130 06:32:03 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:59.130 06:32:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.130 06:32:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.130 06:32:03 -- common/autotest_common.sh@10 -- # set +x 00:05:59.130 ************************************ 00:05:59.130 START TEST skip_rpc 00:05:59.130 ************************************ 00:05:59.130 06:32:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:59.130 * Looking for test storage... 00:05:59.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:59.130 06:32:03 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:59.130 06:32:03 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:59.130 06:32:03 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:59.130 06:32:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.130 06:32:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.130 06:32:03 -- common/autotest_common.sh@10 -- # set +x 00:05:59.388 ************************************ 00:05:59.388 START TEST skip_rpc 00:05:59.388 ************************************ 00:05:59.388 06:32:03 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:05:59.388 06:32:03 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=4059227 00:05:59.388 06:32:03 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:59.388 06:32:03 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.388 06:32:03 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:59.388 [2024-04-17 06:32:03.873645] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
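The skip_rpc case starting here only has to show that a target launched with --no-rpc-server never answers RPC; a condensed sketch of that check, with the workspace path shortened:

  build/bin/spdk_tgt --no-rpc-server -m 0x1 &   # target runs, but the RPC server is never started
  spdk_pid=$!
  sleep 5
  if scripts/rpc.py spdk_get_version; then      # expected to fail: nothing listens on /var/tmp/spdk.sock
      echo 'unexpected: RPC answered'; exit 1
  fi
  kill "$spdk_pid"; wait "$spdk_pid"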
00:05:59.388 [2024-04-17 06:32:03.873709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4059227 ] 00:05:59.388 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.388 [2024-04-17 06:32:03.934717] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.645 [2024-04-17 06:32:04.024461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.645 [2024-04-17 06:32:04.024569] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:06:04.911 06:32:08 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:04.911 06:32:08 -- common/autotest_common.sh@638 -- # local es=0 00:06:04.911 06:32:08 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:04.911 06:32:08 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:06:04.911 06:32:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:04.911 06:32:08 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:06:04.911 06:32:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:04.911 06:32:08 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:06:04.911 06:32:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:04.911 06:32:08 -- common/autotest_common.sh@10 -- # set +x 00:06:04.911 06:32:08 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:06:04.911 06:32:08 -- common/autotest_common.sh@641 -- # es=1 00:06:04.911 06:32:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:04.911 06:32:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:04.911 06:32:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:04.911 06:32:08 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:04.911 06:32:08 -- rpc/skip_rpc.sh@23 -- # killprocess 4059227 00:06:04.911 06:32:08 -- common/autotest_common.sh@936 -- # '[' -z 4059227 ']' 00:06:04.911 06:32:08 -- common/autotest_common.sh@940 -- # kill -0 4059227 00:06:04.911 06:32:08 -- common/autotest_common.sh@941 -- # uname 00:06:04.911 06:32:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:04.911 06:32:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4059227 00:06:04.911 06:32:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:04.911 06:32:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:04.911 06:32:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4059227' 00:06:04.911 killing process with pid 4059227 00:06:04.911 06:32:08 -- common/autotest_common.sh@955 -- # kill 4059227 00:06:04.911 06:32:08 -- common/autotest_common.sh@960 -- # wait 4059227 00:06:04.911 00:06:04.911 real 0m5.431s 00:06:04.911 user 0m5.108s 00:06:04.911 sys 0m0.327s 00:06:04.911 06:32:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:04.911 06:32:09 -- common/autotest_common.sh@10 -- # set +x 00:06:04.911 ************************************ 00:06:04.911 END TEST skip_rpc 00:06:04.911 ************************************ 00:06:04.911 06:32:09 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:04.911 06:32:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:04.911 06:32:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.911 06:32:09 -- common/autotest_common.sh@10 -- # set +x 00:06:04.911 
************************************ 00:06:04.911 START TEST skip_rpc_with_json 00:06:04.911 ************************************ 00:06:04.911 06:32:09 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:06:04.911 06:32:09 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:04.911 06:32:09 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=4059919 00:06:04.911 06:32:09 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.911 06:32:09 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.911 06:32:09 -- rpc/skip_rpc.sh@31 -- # waitforlisten 4059919 00:06:04.911 06:32:09 -- common/autotest_common.sh@817 -- # '[' -z 4059919 ']' 00:06:04.911 06:32:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.912 06:32:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:04.912 06:32:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.912 06:32:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:04.912 06:32:09 -- common/autotest_common.sh@10 -- # set +x 00:06:04.912 [2024-04-17 06:32:09.425081] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:06:04.912 [2024-04-17 06:32:09.425159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4059919 ] 00:06:04.912 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.912 [2024-04-17 06:32:09.485946] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.170 [2024-04-17 06:32:09.575221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.429 06:32:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:05.429 06:32:09 -- common/autotest_common.sh@850 -- # return 0 00:06:05.429 06:32:09 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:05.429 06:32:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:05.429 06:32:09 -- common/autotest_common.sh@10 -- # set +x 00:06:05.429 [2024-04-17 06:32:09.836148] nvmf_rpc.c:2500:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:05.429 request: 00:06:05.429 { 00:06:05.429 "trtype": "tcp", 00:06:05.429 "method": "nvmf_get_transports", 00:06:05.429 "req_id": 1 00:06:05.429 } 00:06:05.429 Got JSON-RPC error response 00:06:05.429 response: 00:06:05.429 { 00:06:05.429 "code": -19, 00:06:05.429 "message": "No such device" 00:06:05.429 } 00:06:05.429 06:32:09 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:06:05.429 06:32:09 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:05.429 06:32:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:05.429 06:32:09 -- common/autotest_common.sh@10 -- # set +x 00:06:05.429 [2024-04-17 06:32:09.844277] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:05.429 06:32:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:05.429 06:32:09 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:05.429 06:32:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:05.429 06:32:09 -- common/autotest_common.sh@10 -- # set +x 00:06:05.429 06:32:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
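The first half of skip_rpc_with_json, traced above, proves the tcp transport is absent, creates it, and then snapshots the resulting configuration; roughly:

  scripts/rpc.py nvmf_get_transports --trtype tcp    # JSON-RPC error: transport 'tcp' does not exist
  scripts/rpc.py nvmf_create_transport -t tcp        # target logs '*** TCP Transport Init ***'
  scripts/rpc.py save_config > test/rpc/config.json  # dump every subsystem's config as JSON (printed below)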
00:06:05.429 06:32:09 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:05.429 { 00:06:05.429 "subsystems": [ 00:06:05.429 { 00:06:05.429 "subsystem": "vfio_user_target", 00:06:05.429 "config": null 00:06:05.429 }, 00:06:05.429 { 00:06:05.429 "subsystem": "keyring", 00:06:05.429 "config": [] 00:06:05.429 }, 00:06:05.429 { 00:06:05.429 "subsystem": "iobuf", 00:06:05.429 "config": [ 00:06:05.429 { 00:06:05.429 "method": "iobuf_set_options", 00:06:05.429 "params": { 00:06:05.429 "small_pool_count": 8192, 00:06:05.429 "large_pool_count": 1024, 00:06:05.429 "small_bufsize": 8192, 00:06:05.429 "large_bufsize": 135168 00:06:05.429 } 00:06:05.429 } 00:06:05.429 ] 00:06:05.429 }, 00:06:05.429 { 00:06:05.429 "subsystem": "sock", 00:06:05.429 "config": [ 00:06:05.429 { 00:06:05.429 "method": "sock_impl_set_options", 00:06:05.429 "params": { 00:06:05.429 "impl_name": "posix", 00:06:05.429 "recv_buf_size": 2097152, 00:06:05.429 "send_buf_size": 2097152, 00:06:05.429 "enable_recv_pipe": true, 00:06:05.429 "enable_quickack": false, 00:06:05.429 "enable_placement_id": 0, 00:06:05.429 "enable_zerocopy_send_server": true, 00:06:05.429 "enable_zerocopy_send_client": false, 00:06:05.429 "zerocopy_threshold": 0, 00:06:05.429 "tls_version": 0, 00:06:05.429 "enable_ktls": false 00:06:05.429 } 00:06:05.429 }, 00:06:05.429 { 00:06:05.429 "method": "sock_impl_set_options", 00:06:05.429 "params": { 00:06:05.429 "impl_name": "ssl", 00:06:05.429 "recv_buf_size": 4096, 00:06:05.429 "send_buf_size": 4096, 00:06:05.429 "enable_recv_pipe": true, 00:06:05.429 "enable_quickack": false, 00:06:05.429 "enable_placement_id": 0, 00:06:05.429 "enable_zerocopy_send_server": true, 00:06:05.429 "enable_zerocopy_send_client": false, 00:06:05.429 "zerocopy_threshold": 0, 00:06:05.429 "tls_version": 0, 00:06:05.429 "enable_ktls": false 00:06:05.429 } 00:06:05.429 } 00:06:05.429 ] 00:06:05.429 }, 00:06:05.429 { 00:06:05.429 "subsystem": "vmd", 00:06:05.429 "config": [] 00:06:05.429 }, 00:06:05.429 { 00:06:05.429 "subsystem": "accel", 00:06:05.429 "config": [ 00:06:05.429 { 00:06:05.429 "method": "accel_set_options", 00:06:05.429 "params": { 00:06:05.429 "small_cache_size": 128, 00:06:05.429 "large_cache_size": 16, 00:06:05.429 "task_count": 2048, 00:06:05.429 "sequence_count": 2048, 00:06:05.429 "buf_count": 2048 00:06:05.429 } 00:06:05.429 } 00:06:05.429 ] 00:06:05.429 }, 00:06:05.429 { 00:06:05.429 "subsystem": "bdev", 00:06:05.429 "config": [ 00:06:05.429 { 00:06:05.429 "method": "bdev_set_options", 00:06:05.429 "params": { 00:06:05.429 "bdev_io_pool_size": 65535, 00:06:05.429 "bdev_io_cache_size": 256, 00:06:05.429 "bdev_auto_examine": true, 00:06:05.429 "iobuf_small_cache_size": 128, 00:06:05.429 "iobuf_large_cache_size": 16 00:06:05.429 } 00:06:05.429 }, 00:06:05.429 { 00:06:05.429 "method": "bdev_raid_set_options", 00:06:05.429 "params": { 00:06:05.429 "process_window_size_kb": 1024 00:06:05.429 } 00:06:05.429 }, 00:06:05.429 { 00:06:05.429 "method": "bdev_iscsi_set_options", 00:06:05.429 "params": { 00:06:05.429 "timeout_sec": 30 00:06:05.429 } 00:06:05.429 }, 00:06:05.429 { 00:06:05.429 "method": "bdev_nvme_set_options", 00:06:05.429 "params": { 00:06:05.429 "action_on_timeout": "none", 00:06:05.429 "timeout_us": 0, 00:06:05.429 "timeout_admin_us": 0, 00:06:05.429 "keep_alive_timeout_ms": 10000, 00:06:05.429 "arbitration_burst": 0, 00:06:05.429 "low_priority_weight": 0, 00:06:05.429 "medium_priority_weight": 0, 00:06:05.429 "high_priority_weight": 0, 00:06:05.429 
"nvme_adminq_poll_period_us": 10000, 00:06:05.429 "nvme_ioq_poll_period_us": 0, 00:06:05.429 "io_queue_requests": 0, 00:06:05.429 "delay_cmd_submit": true, 00:06:05.429 "transport_retry_count": 4, 00:06:05.429 "bdev_retry_count": 3, 00:06:05.429 "transport_ack_timeout": 0, 00:06:05.429 "ctrlr_loss_timeout_sec": 0, 00:06:05.429 "reconnect_delay_sec": 0, 00:06:05.429 "fast_io_fail_timeout_sec": 0, 00:06:05.429 "disable_auto_failback": false, 00:06:05.429 "generate_uuids": false, 00:06:05.429 "transport_tos": 0, 00:06:05.429 "nvme_error_stat": false, 00:06:05.429 "rdma_srq_size": 0, 00:06:05.429 "io_path_stat": false, 00:06:05.429 "allow_accel_sequence": false, 00:06:05.429 "rdma_max_cq_size": 0, 00:06:05.429 "rdma_cm_event_timeout_ms": 0, 00:06:05.429 "dhchap_digests": [ 00:06:05.429 "sha256", 00:06:05.429 "sha384", 00:06:05.429 "sha512" 00:06:05.429 ], 00:06:05.429 "dhchap_dhgroups": [ 00:06:05.429 "null", 00:06:05.429 "ffdhe2048", 00:06:05.429 "ffdhe3072", 00:06:05.429 "ffdhe4096", 00:06:05.429 "ffdhe6144", 00:06:05.429 "ffdhe8192" 00:06:05.429 ] 00:06:05.429 } 00:06:05.429 }, 00:06:05.429 { 00:06:05.429 "method": "bdev_nvme_set_hotplug", 00:06:05.429 "params": { 00:06:05.429 "period_us": 100000, 00:06:05.429 "enable": false 00:06:05.429 } 00:06:05.429 }, 00:06:05.429 { 00:06:05.429 "method": "bdev_wait_for_examine" 00:06:05.429 } 00:06:05.429 ] 00:06:05.429 }, 00:06:05.429 { 00:06:05.429 "subsystem": "scsi", 00:06:05.429 "config": null 00:06:05.429 }, 00:06:05.429 { 00:06:05.429 "subsystem": "scheduler", 00:06:05.429 "config": [ 00:06:05.429 { 00:06:05.429 "method": "framework_set_scheduler", 00:06:05.429 "params": { 00:06:05.429 "name": "static" 00:06:05.429 } 00:06:05.429 } 00:06:05.429 ] 00:06:05.429 }, 00:06:05.429 { 00:06:05.429 "subsystem": "vhost_scsi", 00:06:05.429 "config": [] 00:06:05.429 }, 00:06:05.429 { 00:06:05.429 "subsystem": "vhost_blk", 00:06:05.429 "config": [] 00:06:05.429 }, 00:06:05.429 { 00:06:05.429 "subsystem": "ublk", 00:06:05.429 "config": [] 00:06:05.429 }, 00:06:05.429 { 00:06:05.429 "subsystem": "nbd", 00:06:05.429 "config": [] 00:06:05.429 }, 00:06:05.429 { 00:06:05.429 "subsystem": "nvmf", 00:06:05.429 "config": [ 00:06:05.429 { 00:06:05.429 "method": "nvmf_set_config", 00:06:05.429 "params": { 00:06:05.429 "discovery_filter": "match_any", 00:06:05.429 "admin_cmd_passthru": { 00:06:05.429 "identify_ctrlr": false 00:06:05.429 } 00:06:05.429 } 00:06:05.429 }, 00:06:05.429 { 00:06:05.429 "method": "nvmf_set_max_subsystems", 00:06:05.429 "params": { 00:06:05.429 "max_subsystems": 1024 00:06:05.429 } 00:06:05.429 }, 00:06:05.429 { 00:06:05.429 "method": "nvmf_set_crdt", 00:06:05.429 "params": { 00:06:05.429 "crdt1": 0, 00:06:05.429 "crdt2": 0, 00:06:05.429 "crdt3": 0 00:06:05.429 } 00:06:05.429 }, 00:06:05.429 { 00:06:05.429 "method": "nvmf_create_transport", 00:06:05.429 "params": { 00:06:05.429 "trtype": "TCP", 00:06:05.429 "max_queue_depth": 128, 00:06:05.429 "max_io_qpairs_per_ctrlr": 127, 00:06:05.429 "in_capsule_data_size": 4096, 00:06:05.429 "max_io_size": 131072, 00:06:05.429 "io_unit_size": 131072, 00:06:05.429 "max_aq_depth": 128, 00:06:05.429 "num_shared_buffers": 511, 00:06:05.429 "buf_cache_size": 4294967295, 00:06:05.429 "dif_insert_or_strip": false, 00:06:05.429 "zcopy": false, 00:06:05.429 "c2h_success": true, 00:06:05.429 "sock_priority": 0, 00:06:05.429 "abort_timeout_sec": 1, 00:06:05.429 "ack_timeout": 0 00:06:05.429 } 00:06:05.429 } 00:06:05.429 ] 00:06:05.430 }, 00:06:05.430 { 00:06:05.430 "subsystem": "iscsi", 00:06:05.430 "config": [ 
00:06:05.430 { 00:06:05.430 "method": "iscsi_set_options", 00:06:05.430 "params": { 00:06:05.430 "node_base": "iqn.2016-06.io.spdk", 00:06:05.430 "max_sessions": 128, 00:06:05.430 "max_connections_per_session": 2, 00:06:05.430 "max_queue_depth": 64, 00:06:05.430 "default_time2wait": 2, 00:06:05.430 "default_time2retain": 20, 00:06:05.430 "first_burst_length": 8192, 00:06:05.430 "immediate_data": true, 00:06:05.430 "allow_duplicated_isid": false, 00:06:05.430 "error_recovery_level": 0, 00:06:05.430 "nop_timeout": 60, 00:06:05.430 "nop_in_interval": 30, 00:06:05.430 "disable_chap": false, 00:06:05.430 "require_chap": false, 00:06:05.430 "mutual_chap": false, 00:06:05.430 "chap_group": 0, 00:06:05.430 "max_large_datain_per_connection": 64, 00:06:05.430 "max_r2t_per_connection": 4, 00:06:05.430 "pdu_pool_size": 36864, 00:06:05.430 "immediate_data_pool_size": 16384, 00:06:05.430 "data_out_pool_size": 2048 00:06:05.430 } 00:06:05.430 } 00:06:05.430 ] 00:06:05.430 } 00:06:05.430 ] 00:06:05.430 } 00:06:05.430 06:32:09 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:05.430 06:32:09 -- rpc/skip_rpc.sh@40 -- # killprocess 4059919 00:06:05.430 06:32:09 -- common/autotest_common.sh@936 -- # '[' -z 4059919 ']' 00:06:05.430 06:32:09 -- common/autotest_common.sh@940 -- # kill -0 4059919 00:06:05.430 06:32:09 -- common/autotest_common.sh@941 -- # uname 00:06:05.430 06:32:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:05.430 06:32:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4059919 00:06:05.430 06:32:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:05.430 06:32:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:05.430 06:32:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4059919' 00:06:05.430 killing process with pid 4059919 00:06:05.430 06:32:10 -- common/autotest_common.sh@955 -- # kill 4059919 00:06:05.430 06:32:10 -- common/autotest_common.sh@960 -- # wait 4059919 00:06:05.995 06:32:10 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=4060055 00:06:05.995 06:32:10 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:05.995 06:32:10 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:11.257 06:32:15 -- rpc/skip_rpc.sh@50 -- # killprocess 4060055 00:06:11.257 06:32:15 -- common/autotest_common.sh@936 -- # '[' -z 4060055 ']' 00:06:11.257 06:32:15 -- common/autotest_common.sh@940 -- # kill -0 4060055 00:06:11.257 06:32:15 -- common/autotest_common.sh@941 -- # uname 00:06:11.257 06:32:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:11.257 06:32:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4060055 00:06:11.257 06:32:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:11.257 06:32:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:11.257 06:32:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4060055' 00:06:11.257 killing process with pid 4060055 00:06:11.257 06:32:15 -- common/autotest_common.sh@955 -- # kill 4060055 00:06:11.257 06:32:15 -- common/autotest_common.sh@960 -- # wait 4060055 00:06:11.516 06:32:15 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:11.516 06:32:15 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:11.516 
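The second half of the test replays that JSON with no RPC server at all and checks the target's log to confirm the tcp transport was recreated purely from the file; a sketch, assuming the target's output is captured to log.txt:

  build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
  spdk_pid=$!
  sleep 5
  kill "$spdk_pid"; wait "$spdk_pid"
  grep -q 'TCP Transport Init' test/rpc/log.txt   # transport restored from config.json, no RPC involved
  rm test/rpc/log.txt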
00:06:11.516 real 0m6.498s 00:06:11.516 user 0m6.088s 00:06:11.516 sys 0m0.687s 00:06:11.516 06:32:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:11.516 06:32:15 -- common/autotest_common.sh@10 -- # set +x 00:06:11.516 ************************************ 00:06:11.516 END TEST skip_rpc_with_json 00:06:11.516 ************************************ 00:06:11.516 06:32:15 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:11.516 06:32:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:11.516 06:32:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.516 06:32:15 -- common/autotest_common.sh@10 -- # set +x 00:06:11.516 ************************************ 00:06:11.516 START TEST skip_rpc_with_delay 00:06:11.516 ************************************ 00:06:11.516 06:32:15 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:06:11.516 06:32:15 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:11.516 06:32:15 -- common/autotest_common.sh@638 -- # local es=0 00:06:11.516 06:32:15 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:11.516 06:32:15 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:11.516 06:32:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:11.516 06:32:15 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:11.516 06:32:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:11.516 06:32:15 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:11.516 06:32:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:11.516 06:32:15 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:11.516 06:32:15 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:11.516 06:32:15 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:11.516 [2024-04-17 06:32:16.044405] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
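The error just above is the whole point of skip_rpc_with_delay: --wait-for-rpc parks initialization until an RPC arrives, which can never happen under --no-rpc-server, so the combination must fail at startup:

  build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  # expected: "Cannot use '--wait-for-rpc' if no RPC server is going to be started." and a non-zero exit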
00:06:11.516 [2024-04-17 06:32:16.044532] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:11.516 06:32:16 -- common/autotest_common.sh@641 -- # es=1 00:06:11.516 06:32:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:11.516 06:32:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:11.516 06:32:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:11.516 00:06:11.516 real 0m0.066s 00:06:11.516 user 0m0.043s 00:06:11.516 sys 0m0.023s 00:06:11.516 06:32:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:11.516 06:32:16 -- common/autotest_common.sh@10 -- # set +x 00:06:11.516 ************************************ 00:06:11.516 END TEST skip_rpc_with_delay 00:06:11.516 ************************************ 00:06:11.516 06:32:16 -- rpc/skip_rpc.sh@77 -- # uname 00:06:11.516 06:32:16 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:11.516 06:32:16 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:11.516 06:32:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:11.516 06:32:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.516 06:32:16 -- common/autotest_common.sh@10 -- # set +x 00:06:11.775 ************************************ 00:06:11.775 START TEST exit_on_failed_rpc_init 00:06:11.775 ************************************ 00:06:11.775 06:32:16 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:06:11.775 06:32:16 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=4060792 00:06:11.775 06:32:16 -- rpc/skip_rpc.sh@63 -- # waitforlisten 4060792 00:06:11.775 06:32:16 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.775 06:32:16 -- common/autotest_common.sh@817 -- # '[' -z 4060792 ']' 00:06:11.775 06:32:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.775 06:32:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:11.775 06:32:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.775 06:32:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:11.775 06:32:16 -- common/autotest_common.sh@10 -- # set +x 00:06:11.775 [2024-04-17 06:32:16.236225] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:06:11.775 [2024-04-17 06:32:16.236315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4060792 ] 00:06:11.775 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.775 [2024-04-17 06:32:16.298122] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.033 [2024-04-17 06:32:16.387758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.292 06:32:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:12.292 06:32:16 -- common/autotest_common.sh@850 -- # return 0 00:06:12.292 06:32:16 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:12.292 06:32:16 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:12.292 06:32:16 -- common/autotest_common.sh@638 -- # local es=0 00:06:12.292 06:32:16 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:12.292 06:32:16 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:12.292 06:32:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:12.292 06:32:16 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:12.292 06:32:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:12.292 06:32:16 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:12.292 06:32:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:12.292 06:32:16 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:12.292 06:32:16 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:12.292 06:32:16 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:12.292 [2024-04-17 06:32:16.699742] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:06:12.292 [2024-04-17 06:32:16.699828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4060802 ] 00:06:12.292 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.292 [2024-04-17 06:32:16.762193] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.292 [2024-04-17 06:32:16.855434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.292 [2024-04-17 06:32:16.855572] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:12.292 [2024-04-17 06:32:16.855593] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:12.292 [2024-04-17 06:32:16.855608] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:12.550 06:32:16 -- common/autotest_common.sh@641 -- # es=234 00:06:12.550 06:32:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:12.550 06:32:16 -- common/autotest_common.sh@650 -- # es=106 00:06:12.550 06:32:16 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:12.550 06:32:16 -- common/autotest_common.sh@658 -- # es=1 00:06:12.550 06:32:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:12.550 06:32:16 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:12.550 06:32:16 -- rpc/skip_rpc.sh@70 -- # killprocess 4060792 00:06:12.550 06:32:16 -- common/autotest_common.sh@936 -- # '[' -z 4060792 ']' 00:06:12.550 06:32:16 -- common/autotest_common.sh@940 -- # kill -0 4060792 00:06:12.550 06:32:16 -- common/autotest_common.sh@941 -- # uname 00:06:12.550 06:32:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:12.550 06:32:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4060792 00:06:12.550 06:32:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:12.550 06:32:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:12.550 06:32:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4060792' 00:06:12.550 killing process with pid 4060792 00:06:12.550 06:32:16 -- common/autotest_common.sh@955 -- # kill 4060792 00:06:12.550 06:32:16 -- common/autotest_common.sh@960 -- # wait 4060792 00:06:12.808 00:06:12.808 real 0m1.186s 00:06:12.808 user 0m1.279s 00:06:12.808 sys 0m0.467s 00:06:12.808 06:32:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:12.809 06:32:17 -- common/autotest_common.sh@10 -- # set +x 00:06:12.809 ************************************ 00:06:12.809 END TEST exit_on_failed_rpc_init 00:06:12.809 ************************************ 00:06:12.809 06:32:17 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:12.809 00:06:12.809 real 0m13.717s 00:06:12.809 user 0m12.736s 00:06:12.809 sys 0m1.794s 00:06:12.809 06:32:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:12.809 06:32:17 -- common/autotest_common.sh@10 -- # set +x 00:06:12.809 ************************************ 00:06:12.809 END TEST skip_rpc 00:06:12.809 ************************************ 00:06:12.809 06:32:17 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:12.809 06:32:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:12.809 06:32:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.809 06:32:17 -- common/autotest_common.sh@10 -- # set +x 00:06:13.067 ************************************ 00:06:13.067 START TEST rpc_client 00:06:13.067 ************************************ 00:06:13.067 06:32:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:13.067 * Looking for test storage... 
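exit_on_failed_rpc_init, completed above, boots one target normally and then expects a second instance pointed at the same default RPC socket to bail out, as the "socket path /var/tmp/spdk.sock in use" errors show; roughly:

  build/bin/spdk_tgt -m 0x1 &    # first target owns /var/tmp/spdk.sock
  first_pid=$!
  # (the harness uses waitforlisten to wait for the socket before continuing)
  build/bin/spdk_tgt -m 0x2      # must fail: RPC socket already in use, app stops with a non-zero code
  kill "$first_pid"; wait "$first_pid"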
00:06:13.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:13.067 06:32:17 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:13.067 OK 00:06:13.067 06:32:17 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:13.067 00:06:13.067 real 0m0.068s 00:06:13.067 user 0m0.027s 00:06:13.067 sys 0m0.045s 00:06:13.067 06:32:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:13.067 06:32:17 -- common/autotest_common.sh@10 -- # set +x 00:06:13.067 ************************************ 00:06:13.067 END TEST rpc_client 00:06:13.067 ************************************ 00:06:13.067 06:32:17 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:13.067 06:32:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:13.067 06:32:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.067 06:32:17 -- common/autotest_common.sh@10 -- # set +x 00:06:13.325 ************************************ 00:06:13.325 START TEST json_config 00:06:13.325 ************************************ 00:06:13.325 06:32:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:13.325 06:32:17 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:13.325 06:32:17 -- nvmf/common.sh@7 -- # uname -s 00:06:13.325 06:32:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:13.325 06:32:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:13.325 06:32:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:13.325 06:32:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:13.325 06:32:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:13.325 06:32:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:13.325 06:32:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:13.325 06:32:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:13.325 06:32:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:13.325 06:32:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:13.325 06:32:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:13.325 06:32:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:13.325 06:32:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:13.325 06:32:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:13.325 06:32:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:13.325 06:32:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:13.325 06:32:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:13.325 06:32:17 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:13.325 06:32:17 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.325 06:32:17 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.325 06:32:17 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.325 06:32:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.325 06:32:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.325 06:32:17 -- paths/export.sh@5 -- # export PATH 00:06:13.325 06:32:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.325 06:32:17 -- nvmf/common.sh@47 -- # : 0 00:06:13.325 06:32:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:13.325 06:32:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:13.325 06:32:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:13.325 06:32:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:13.326 06:32:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:13.326 06:32:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:13.326 06:32:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:13.326 06:32:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:13.326 06:32:17 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:13.326 06:32:17 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:13.326 06:32:17 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:13.326 06:32:17 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:13.326 06:32:17 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:13.326 06:32:17 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:13.326 06:32:17 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:13.326 06:32:17 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:13.326 06:32:17 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:13.326 06:32:17 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:13.326 06:32:17 -- 
json_config/json_config.sh@33 -- # declare -A app_params 00:06:13.326 06:32:17 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:13.326 06:32:17 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:13.326 06:32:17 -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:13.326 06:32:17 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:13.326 06:32:17 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:13.326 INFO: JSON configuration test init 00:06:13.326 06:32:17 -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:13.326 06:32:17 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:06:13.326 06:32:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:13.326 06:32:17 -- common/autotest_common.sh@10 -- # set +x 00:06:13.326 06:32:17 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:13.326 06:32:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:13.326 06:32:17 -- common/autotest_common.sh@10 -- # set +x 00:06:13.326 06:32:17 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:13.326 06:32:17 -- json_config/common.sh@9 -- # local app=target 00:06:13.326 06:32:17 -- json_config/common.sh@10 -- # shift 00:06:13.326 06:32:17 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:13.326 06:32:17 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:13.326 06:32:17 -- json_config/common.sh@15 -- # local app_extra_params= 00:06:13.326 06:32:17 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:13.326 06:32:17 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:13.326 06:32:17 -- json_config/common.sh@22 -- # app_pid["$app"]=4061060 00:06:13.326 06:32:17 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:13.326 06:32:17 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:13.326 Waiting for target to run... 00:06:13.326 06:32:17 -- json_config/common.sh@25 -- # waitforlisten 4061060 /var/tmp/spdk_tgt.sock 00:06:13.326 06:32:17 -- common/autotest_common.sh@817 -- # '[' -z 4061060 ']' 00:06:13.326 06:32:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:13.326 06:32:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:13.326 06:32:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:13.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:13.326 06:32:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:13.326 06:32:17 -- common/autotest_common.sh@10 -- # set +x 00:06:13.326 [2024-04-17 06:32:17.783243] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
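For json_config the target gets its own RPC socket and is held at --wait-for-rpc until configuration is pushed over RPC; a sketch of the startup traced here (how load_config is fed from gen_nvme.sh is glossed over):

  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  # ... load_config is then issued against /var/tmp/spdk_tgt.sock, after which:
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types | jq -r '.[]'   # expect bdev_register, bdev_unregister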
00:06:13.326 [2024-04-17 06:32:17.783322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4061060 ] 00:06:13.326 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.584 [2024-04-17 06:32:18.123121] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.584 [2024-04-17 06:32:18.185075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.152 06:32:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:14.152 06:32:18 -- common/autotest_common.sh@850 -- # return 0 00:06:14.152 06:32:18 -- json_config/common.sh@26 -- # echo '' 00:06:14.152 00:06:14.152 06:32:18 -- json_config/json_config.sh@269 -- # create_accel_config 00:06:14.152 06:32:18 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:14.152 06:32:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:14.152 06:32:18 -- common/autotest_common.sh@10 -- # set +x 00:06:14.152 06:32:18 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:06:14.152 06:32:18 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:14.152 06:32:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:14.152 06:32:18 -- common/autotest_common.sh@10 -- # set +x 00:06:14.152 06:32:18 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:14.152 06:32:18 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:14.152 06:32:18 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:17.461 06:32:21 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:17.461 06:32:21 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:17.461 06:32:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:17.461 06:32:21 -- common/autotest_common.sh@10 -- # set +x 00:06:17.461 06:32:21 -- json_config/json_config.sh@45 -- # local ret=0 00:06:17.461 06:32:21 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:17.461 06:32:21 -- json_config/json_config.sh@46 -- # local enabled_types 00:06:17.461 06:32:21 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:17.461 06:32:21 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:17.461 06:32:21 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:17.719 06:32:22 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:17.719 06:32:22 -- json_config/json_config.sh@48 -- # local get_types 00:06:17.719 06:32:22 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:17.719 06:32:22 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:17.719 06:32:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:17.719 06:32:22 -- common/autotest_common.sh@10 -- # set +x 00:06:17.719 06:32:22 -- json_config/json_config.sh@55 -- # return 0 00:06:17.719 06:32:22 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:17.719 06:32:22 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:17.719 06:32:22 -- 
json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:17.719 06:32:22 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:17.719 06:32:22 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:17.719 06:32:22 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:17.719 06:32:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:17.719 06:32:22 -- common/autotest_common.sh@10 -- # set +x 00:06:17.719 06:32:22 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:17.719 06:32:22 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:17.719 06:32:22 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:17.719 06:32:22 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:17.719 06:32:22 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:17.977 MallocForNvmf0 00:06:17.977 06:32:22 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:17.977 06:32:22 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:18.234 MallocForNvmf1 00:06:18.234 06:32:22 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:18.234 06:32:22 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:18.492 [2024-04-17 06:32:22.850659] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:18.492 06:32:22 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:18.492 06:32:22 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:18.749 06:32:23 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:18.749 06:32:23 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:18.749 06:32:23 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:18.749 06:32:23 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:19.006 06:32:23 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:19.006 06:32:23 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:19.264 [2024-04-17 06:32:23.817839] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:19.264 06:32:23 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:19.264 06:32:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:19.264 
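(The NVMe-oF subsystem configuration that was just built up corresponds to a short RPC sequence against the target's UNIX socket: create the backing malloc bdevs, create the TCP transport, create a subsystem, attach the bdevs as namespaces, and add a listener. A condensed sketch of the calls traced above, with rpc.py invoked relative to the spdk checkout rather than the absolute workspace path:

  RPC="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
)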
06:32:23 -- common/autotest_common.sh@10 -- # set +x 00:06:19.264 06:32:23 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:19.264 06:32:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:19.264 06:32:23 -- common/autotest_common.sh@10 -- # set +x 00:06:19.521 06:32:23 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:19.521 06:32:23 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:19.521 06:32:23 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:19.521 MallocBdevForConfigChangeCheck 00:06:19.521 06:32:24 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:19.521 06:32:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:19.521 06:32:24 -- common/autotest_common.sh@10 -- # set +x 00:06:19.779 06:32:24 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:19.779 06:32:24 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:20.036 06:32:24 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:20.036 INFO: shutting down applications... 00:06:20.036 06:32:24 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:20.036 06:32:24 -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:20.036 06:32:24 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:20.037 06:32:24 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:21.934 Calling clear_iscsi_subsystem 00:06:21.934 Calling clear_nvmf_subsystem 00:06:21.934 Calling clear_nbd_subsystem 00:06:21.934 Calling clear_ublk_subsystem 00:06:21.934 Calling clear_vhost_blk_subsystem 00:06:21.934 Calling clear_vhost_scsi_subsystem 00:06:21.934 Calling clear_bdev_subsystem 00:06:21.934 06:32:26 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:21.934 06:32:26 -- json_config/json_config.sh@343 -- # count=100 00:06:21.934 06:32:26 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:21.934 06:32:26 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:21.934 06:32:26 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:21.934 06:32:26 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:21.934 06:32:26 -- json_config/json_config.sh@345 -- # break 00:06:21.934 06:32:26 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:21.934 06:32:26 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:21.934 06:32:26 -- json_config/common.sh@31 -- # local app=target 00:06:21.934 06:32:26 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:21.934 06:32:26 -- json_config/common.sh@35 -- # [[ -n 4061060 ]] 00:06:21.934 06:32:26 -- json_config/common.sh@38 -- # kill -SIGINT 4061060 00:06:21.934 06:32:26 -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:21.934 06:32:26 -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:06:21.934 06:32:26 -- json_config/common.sh@41 -- # kill -0 4061060 00:06:21.934 06:32:26 -- json_config/common.sh@45 -- # sleep 0.5 00:06:22.501 06:32:27 -- json_config/common.sh@40 -- # (( i++ )) 00:06:22.501 06:32:27 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:22.501 06:32:27 -- json_config/common.sh@41 -- # kill -0 4061060 00:06:22.501 06:32:27 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:22.501 06:32:27 -- json_config/common.sh@43 -- # break 00:06:22.501 06:32:27 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:22.501 06:32:27 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:22.501 SPDK target shutdown done 00:06:22.501 06:32:27 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:22.501 INFO: relaunching applications... 00:06:22.501 06:32:27 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:22.501 06:32:27 -- json_config/common.sh@9 -- # local app=target 00:06:22.501 06:32:27 -- json_config/common.sh@10 -- # shift 00:06:22.501 06:32:27 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:22.501 06:32:27 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:22.501 06:32:27 -- json_config/common.sh@15 -- # local app_extra_params= 00:06:22.501 06:32:27 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:22.502 06:32:27 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:22.502 06:32:27 -- json_config/common.sh@22 -- # app_pid["$app"]=4062254 00:06:22.502 06:32:27 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:22.502 06:32:27 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:22.502 Waiting for target to run... 00:06:22.502 06:32:27 -- json_config/common.sh@25 -- # waitforlisten 4062254 /var/tmp/spdk_tgt.sock 00:06:22.502 06:32:27 -- common/autotest_common.sh@817 -- # '[' -z 4062254 ']' 00:06:22.502 06:32:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:22.502 06:32:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:22.502 06:32:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:22.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:22.502 06:32:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:22.502 06:32:27 -- common/autotest_common.sh@10 -- # set +x 00:06:22.502 [2024-04-17 06:32:27.064056] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:06:22.502 [2024-04-17 06:32:27.064149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4062254 ] 00:06:22.502 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.067 [2024-04-17 06:32:27.565724] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.067 [2024-04-17 06:32:27.645463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.412 [2024-04-17 06:32:30.664188] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:26.412 [2024-04-17 06:32:30.696647] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:26.412 06:32:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:26.412 06:32:30 -- common/autotest_common.sh@850 -- # return 0 00:06:26.412 06:32:30 -- json_config/common.sh@26 -- # echo '' 00:06:26.412 00:06:26.412 06:32:30 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:26.412 06:32:30 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:26.412 INFO: Checking if target configuration is the same... 00:06:26.413 06:32:30 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:26.413 06:32:30 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:26.413 06:32:30 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:26.413 + '[' 2 -ne 2 ']' 00:06:26.413 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:26.413 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:26.413 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:26.413 +++ basename /dev/fd/62 00:06:26.413 ++ mktemp /tmp/62.XXX 00:06:26.413 + tmp_file_1=/tmp/62.DrY 00:06:26.413 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:26.413 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:26.413 + tmp_file_2=/tmp/spdk_tgt_config.json.nAY 00:06:26.413 + ret=0 00:06:26.413 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:26.670 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:26.670 + diff -u /tmp/62.DrY /tmp/spdk_tgt_config.json.nAY 00:06:26.670 + echo 'INFO: JSON config files are the same' 00:06:26.671 INFO: JSON config files are the same 00:06:26.671 + rm /tmp/62.DrY /tmp/spdk_tgt_config.json.nAY 00:06:26.671 + exit 0 00:06:26.671 06:32:31 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:26.671 06:32:31 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:26.671 INFO: changing configuration and checking if this can be detected... 
00:06:26.671 06:32:31 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:26.671 06:32:31 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:26.928 06:32:31 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:26.928 06:32:31 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:26.928 06:32:31 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:26.928 + '[' 2 -ne 2 ']' 00:06:26.928 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:26.928 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:26.928 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:26.928 +++ basename /dev/fd/62 00:06:26.928 ++ mktemp /tmp/62.XXX 00:06:26.928 + tmp_file_1=/tmp/62.LML 00:06:26.928 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:26.928 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:26.928 + tmp_file_2=/tmp/spdk_tgt_config.json.wSE 00:06:26.928 + ret=0 00:06:26.928 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:27.186 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:27.445 + diff -u /tmp/62.LML /tmp/spdk_tgt_config.json.wSE 00:06:27.445 + ret=1 00:06:27.445 + echo '=== Start of file: /tmp/62.LML ===' 00:06:27.445 + cat /tmp/62.LML 00:06:27.445 + echo '=== End of file: /tmp/62.LML ===' 00:06:27.445 + echo '' 00:06:27.445 + echo '=== Start of file: /tmp/spdk_tgt_config.json.wSE ===' 00:06:27.445 + cat /tmp/spdk_tgt_config.json.wSE 00:06:27.445 + echo '=== End of file: /tmp/spdk_tgt_config.json.wSE ===' 00:06:27.445 + echo '' 00:06:27.445 + rm /tmp/62.LML /tmp/spdk_tgt_config.json.wSE 00:06:27.445 + exit 1 00:06:27.445 06:32:31 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:27.445 INFO: configuration change detected. 
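(For reference, the change-detection step above reduces to dumping the live configuration over RPC, normalizing both JSON documents, and diffing them; a non-empty diff is what flags the change. A minimal sketch of that flow, assuming the target is still listening on /var/tmp/spdk_tgt.sock and using illustrative /tmp file names in place of the mktemp-generated ones such as /tmp/62.LML seen above:

  # dump the running target's configuration
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live_config.json
  # normalize key order in both files so the diff only shows real differences
  ./test/json_config/config_filter.py -method sort < /tmp/live_config.json > /tmp/live_sorted.json
  ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json  > /tmp/ref_sorted.json
  # diff exits 1 and prints hunks when the configurations differ
  diff -u /tmp/ref_sorted.json /tmp/live_sorted.json
)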
00:06:27.445 06:32:31 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:27.445 06:32:31 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:27.445 06:32:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:27.445 06:32:31 -- common/autotest_common.sh@10 -- # set +x 00:06:27.445 06:32:31 -- json_config/json_config.sh@307 -- # local ret=0 00:06:27.445 06:32:31 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:27.445 06:32:31 -- json_config/json_config.sh@317 -- # [[ -n 4062254 ]] 00:06:27.445 06:32:31 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:27.445 06:32:31 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:27.445 06:32:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:27.445 06:32:31 -- common/autotest_common.sh@10 -- # set +x 00:06:27.445 06:32:31 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:27.445 06:32:31 -- json_config/json_config.sh@193 -- # uname -s 00:06:27.445 06:32:31 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:27.445 06:32:31 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:27.445 06:32:31 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:27.445 06:32:31 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:27.445 06:32:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:27.445 06:32:31 -- common/autotest_common.sh@10 -- # set +x 00:06:27.445 06:32:31 -- json_config/json_config.sh@323 -- # killprocess 4062254 00:06:27.445 06:32:31 -- common/autotest_common.sh@936 -- # '[' -z 4062254 ']' 00:06:27.445 06:32:31 -- common/autotest_common.sh@940 -- # kill -0 4062254 00:06:27.445 06:32:31 -- common/autotest_common.sh@941 -- # uname 00:06:27.445 06:32:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:27.445 06:32:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4062254 00:06:27.445 06:32:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:27.445 06:32:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:27.445 06:32:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4062254' 00:06:27.445 killing process with pid 4062254 00:06:27.445 06:32:31 -- common/autotest_common.sh@955 -- # kill 4062254 00:06:27.445 06:32:31 -- common/autotest_common.sh@960 -- # wait 4062254 00:06:29.345 06:32:33 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:29.345 06:32:33 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:29.345 06:32:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:29.345 06:32:33 -- common/autotest_common.sh@10 -- # set +x 00:06:29.345 06:32:33 -- json_config/json_config.sh@328 -- # return 0 00:06:29.345 06:32:33 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:29.345 INFO: Success 00:06:29.345 00:06:29.345 real 0m15.850s 00:06:29.345 user 0m17.530s 00:06:29.345 sys 0m1.996s 00:06:29.345 06:32:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:29.345 06:32:33 -- common/autotest_common.sh@10 -- # set +x 00:06:29.345 ************************************ 00:06:29.345 END TEST json_config 00:06:29.345 ************************************ 00:06:29.345 06:32:33 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:29.345 06:32:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:29.345 06:32:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.345 06:32:33 -- common/autotest_common.sh@10 -- # set +x 00:06:29.345 ************************************ 00:06:29.345 START TEST json_config_extra_key 00:06:29.345 ************************************ 00:06:29.345 06:32:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:29.345 06:32:33 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:29.345 06:32:33 -- nvmf/common.sh@7 -- # uname -s 00:06:29.345 06:32:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:29.345 06:32:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:29.345 06:32:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:29.345 06:32:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:29.345 06:32:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:29.345 06:32:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:29.345 06:32:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:29.345 06:32:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:29.345 06:32:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:29.345 06:32:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:29.345 06:32:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:29.345 06:32:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:29.345 06:32:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:29.345 06:32:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:29.345 06:32:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:29.345 06:32:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:29.345 06:32:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:29.345 06:32:33 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:29.345 06:32:33 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:29.345 06:32:33 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:29.345 06:32:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.345 06:32:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.345 06:32:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.345 06:32:33 -- paths/export.sh@5 -- # export PATH 00:06:29.345 06:32:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.345 06:32:33 -- nvmf/common.sh@47 -- # : 0 00:06:29.345 06:32:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:29.345 06:32:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:29.345 06:32:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:29.345 06:32:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:29.345 06:32:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:29.345 06:32:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:29.345 06:32:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:29.345 06:32:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:29.345 06:32:33 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:29.345 06:32:33 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:29.345 06:32:33 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:29.345 06:32:33 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:29.345 06:32:33 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:29.345 06:32:33 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:29.345 06:32:33 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:29.345 06:32:33 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:29.345 06:32:33 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:29.345 06:32:33 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:29.345 06:32:33 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:29.345 INFO: launching applications... 
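(The launch that follows is the same pattern used for the main json_config run: start spdk_tgt with a core mask, a memory limit, an explicit RPC socket, and a pre-built JSON configuration, then wait until the socket answers RPCs. A minimal sketch under the same assumptions as this log; the polling loop is only an illustrative stand-in for the autotest waitforlisten helper:

  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json test/json_config/extra_key.json &
  tgt_pid=$!
  # poll the RPC socket until the target answers (stand-in for waitforlisten)
  until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
)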
00:06:29.345 06:32:33 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:29.345 06:32:33 -- json_config/common.sh@9 -- # local app=target 00:06:29.345 06:32:33 -- json_config/common.sh@10 -- # shift 00:06:29.345 06:32:33 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:29.345 06:32:33 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:29.345 06:32:33 -- json_config/common.sh@15 -- # local app_extra_params= 00:06:29.345 06:32:33 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:29.345 06:32:33 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:29.345 06:32:33 -- json_config/common.sh@22 -- # app_pid["$app"]=4063180 00:06:29.345 06:32:33 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:29.345 06:32:33 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:29.345 Waiting for target to run... 00:06:29.345 06:32:33 -- json_config/common.sh@25 -- # waitforlisten 4063180 /var/tmp/spdk_tgt.sock 00:06:29.345 06:32:33 -- common/autotest_common.sh@817 -- # '[' -z 4063180 ']' 00:06:29.345 06:32:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:29.345 06:32:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:29.345 06:32:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:29.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:29.345 06:32:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:29.345 06:32:33 -- common/autotest_common.sh@10 -- # set +x 00:06:29.345 [2024-04-17 06:32:33.758941] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:06:29.345 [2024-04-17 06:32:33.759023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4063180 ] 00:06:29.345 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.604 [2024-04-17 06:32:34.113246] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.604 [2024-04-17 06:32:34.174756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.174 06:32:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:30.174 06:32:34 -- common/autotest_common.sh@850 -- # return 0 00:06:30.174 06:32:34 -- json_config/common.sh@26 -- # echo '' 00:06:30.174 00:06:30.174 06:32:34 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:30.174 INFO: shutting down applications... 
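(The shutdown sequence that follows is the generic one shared by these tests: send SIGINT to the target, then poll with kill -0 until it exits, giving up after roughly 30 half-second retries. Sketched in plain shell under the same assumptions as above:

  kill -SIGINT "$tgt_pid"
  for i in $(seq 1 30); do
      kill -0 "$tgt_pid" 2>/dev/null || break   # process gone, shutdown done
      sleep 0.5
  done
)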
00:06:30.174 06:32:34 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:30.174 06:32:34 -- json_config/common.sh@31 -- # local app=target 00:06:30.174 06:32:34 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:30.174 06:32:34 -- json_config/common.sh@35 -- # [[ -n 4063180 ]] 00:06:30.174 06:32:34 -- json_config/common.sh@38 -- # kill -SIGINT 4063180 00:06:30.174 06:32:34 -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:30.174 06:32:34 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:30.174 06:32:34 -- json_config/common.sh@41 -- # kill -0 4063180 00:06:30.174 06:32:34 -- json_config/common.sh@45 -- # sleep 0.5 00:06:30.740 06:32:35 -- json_config/common.sh@40 -- # (( i++ )) 00:06:30.740 06:32:35 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:30.740 06:32:35 -- json_config/common.sh@41 -- # kill -0 4063180 00:06:30.740 06:32:35 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:30.740 06:32:35 -- json_config/common.sh@43 -- # break 00:06:30.740 06:32:35 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:30.740 06:32:35 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:30.740 SPDK target shutdown done 00:06:30.740 06:32:35 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:30.740 Success 00:06:30.740 00:06:30.740 real 0m1.554s 00:06:30.740 user 0m1.505s 00:06:30.740 sys 0m0.441s 00:06:30.740 06:32:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:30.740 06:32:35 -- common/autotest_common.sh@10 -- # set +x 00:06:30.740 ************************************ 00:06:30.740 END TEST json_config_extra_key 00:06:30.740 ************************************ 00:06:30.740 06:32:35 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:30.740 06:32:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:30.740 06:32:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.740 06:32:35 -- common/autotest_common.sh@10 -- # set +x 00:06:30.740 ************************************ 00:06:30.740 START TEST alias_rpc 00:06:30.740 ************************************ 00:06:30.740 06:32:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:30.999 * Looking for test storage... 00:06:30.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:30.999 06:32:35 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:30.999 06:32:35 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=4063486 00:06:30.999 06:32:35 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:30.999 06:32:35 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 4063486 00:06:30.999 06:32:35 -- common/autotest_common.sh@817 -- # '[' -z 4063486 ']' 00:06:30.999 06:32:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.999 06:32:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:30.999 06:32:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:30.999 06:32:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:30.999 06:32:35 -- common/autotest_common.sh@10 -- # set +x 00:06:30.999 [2024-04-17 06:32:35.432027] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:06:30.999 [2024-04-17 06:32:35.432124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4063486 ] 00:06:30.999 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.999 [2024-04-17 06:32:35.489623] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.999 [2024-04-17 06:32:35.571409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.257 06:32:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:31.257 06:32:35 -- common/autotest_common.sh@850 -- # return 0 00:06:31.257 06:32:35 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:31.515 06:32:36 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 4063486 00:06:31.515 06:32:36 -- common/autotest_common.sh@936 -- # '[' -z 4063486 ']' 00:06:31.515 06:32:36 -- common/autotest_common.sh@940 -- # kill -0 4063486 00:06:31.515 06:32:36 -- common/autotest_common.sh@941 -- # uname 00:06:31.515 06:32:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:31.515 06:32:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4063486 00:06:31.774 06:32:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:31.774 06:32:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:31.774 06:32:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4063486' 00:06:31.774 killing process with pid 4063486 00:06:31.774 06:32:36 -- common/autotest_common.sh@955 -- # kill 4063486 00:06:31.774 06:32:36 -- common/autotest_common.sh@960 -- # wait 4063486 00:06:32.033 00:06:32.033 real 0m1.218s 00:06:32.033 user 0m1.294s 00:06:32.033 sys 0m0.416s 00:06:32.033 06:32:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:32.033 06:32:36 -- common/autotest_common.sh@10 -- # set +x 00:06:32.033 ************************************ 00:06:32.033 END TEST alias_rpc 00:06:32.033 ************************************ 00:06:32.033 06:32:36 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:06:32.033 06:32:36 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:32.033 06:32:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:32.033 06:32:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.033 06:32:36 -- common/autotest_common.sh@10 -- # set +x 00:06:32.291 ************************************ 00:06:32.291 START TEST spdkcli_tcp 00:06:32.291 ************************************ 00:06:32.291 06:32:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:32.291 * Looking for test storage... 
00:06:32.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:32.291 06:32:36 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:32.291 06:32:36 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:32.291 06:32:36 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:32.291 06:32:36 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:32.291 06:32:36 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:32.291 06:32:36 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:32.291 06:32:36 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:32.291 06:32:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:32.291 06:32:36 -- common/autotest_common.sh@10 -- # set +x 00:06:32.291 06:32:36 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=4063681 00:06:32.291 06:32:36 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:32.291 06:32:36 -- spdkcli/tcp.sh@27 -- # waitforlisten 4063681 00:06:32.291 06:32:36 -- common/autotest_common.sh@817 -- # '[' -z 4063681 ']' 00:06:32.291 06:32:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.291 06:32:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:32.291 06:32:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.291 06:32:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:32.291 06:32:36 -- common/autotest_common.sh@10 -- # set +x 00:06:32.291 [2024-04-17 06:32:36.780999] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:06:32.291 [2024-04-17 06:32:36.781074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4063681 ] 00:06:32.291 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.291 [2024-04-17 06:32:36.838397] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.549 [2024-04-17 06:32:36.923412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.549 [2024-04-17 06:32:36.923415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.807 06:32:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:32.807 06:32:37 -- common/autotest_common.sh@850 -- # return 0 00:06:32.807 06:32:37 -- spdkcli/tcp.sh@31 -- # socat_pid=4063815 00:06:32.807 06:32:37 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:32.807 06:32:37 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:32.807 [ 00:06:32.807 "bdev_malloc_delete", 00:06:32.807 "bdev_malloc_create", 00:06:32.807 "bdev_null_resize", 00:06:32.807 "bdev_null_delete", 00:06:32.807 "bdev_null_create", 00:06:32.807 "bdev_nvme_cuse_unregister", 00:06:32.807 "bdev_nvme_cuse_register", 00:06:32.807 "bdev_opal_new_user", 00:06:32.807 "bdev_opal_set_lock_state", 00:06:32.807 "bdev_opal_delete", 00:06:32.807 "bdev_opal_get_info", 00:06:32.807 "bdev_opal_create", 00:06:32.807 "bdev_nvme_opal_revert", 00:06:32.807 "bdev_nvme_opal_init", 00:06:32.807 "bdev_nvme_send_cmd", 00:06:32.807 "bdev_nvme_get_path_iostat", 00:06:32.807 "bdev_nvme_get_mdns_discovery_info", 00:06:32.807 "bdev_nvme_stop_mdns_discovery", 00:06:32.807 "bdev_nvme_start_mdns_discovery", 00:06:32.807 "bdev_nvme_set_multipath_policy", 00:06:32.807 "bdev_nvme_set_preferred_path", 00:06:32.807 "bdev_nvme_get_io_paths", 00:06:32.807 "bdev_nvme_remove_error_injection", 00:06:32.807 "bdev_nvme_add_error_injection", 00:06:32.807 "bdev_nvme_get_discovery_info", 00:06:32.807 "bdev_nvme_stop_discovery", 00:06:32.807 "bdev_nvme_start_discovery", 00:06:32.807 "bdev_nvme_get_controller_health_info", 00:06:32.807 "bdev_nvme_disable_controller", 00:06:32.807 "bdev_nvme_enable_controller", 00:06:32.807 "bdev_nvme_reset_controller", 00:06:32.807 "bdev_nvme_get_transport_statistics", 00:06:32.807 "bdev_nvme_apply_firmware", 00:06:32.807 "bdev_nvme_detach_controller", 00:06:32.807 "bdev_nvme_get_controllers", 00:06:32.807 "bdev_nvme_attach_controller", 00:06:32.807 "bdev_nvme_set_hotplug", 00:06:32.807 "bdev_nvme_set_options", 00:06:32.807 "bdev_passthru_delete", 00:06:32.807 "bdev_passthru_create", 00:06:32.807 "bdev_lvol_grow_lvstore", 00:06:32.807 "bdev_lvol_get_lvols", 00:06:32.807 "bdev_lvol_get_lvstores", 00:06:32.807 "bdev_lvol_delete", 00:06:32.807 "bdev_lvol_set_read_only", 00:06:32.807 "bdev_lvol_resize", 00:06:32.807 "bdev_lvol_decouple_parent", 00:06:32.807 "bdev_lvol_inflate", 00:06:32.807 "bdev_lvol_rename", 00:06:32.807 "bdev_lvol_clone_bdev", 00:06:32.807 "bdev_lvol_clone", 00:06:32.807 "bdev_lvol_snapshot", 00:06:32.807 "bdev_lvol_create", 00:06:32.807 "bdev_lvol_delete_lvstore", 00:06:32.807 "bdev_lvol_rename_lvstore", 00:06:32.807 "bdev_lvol_create_lvstore", 00:06:32.807 "bdev_raid_set_options", 00:06:32.807 "bdev_raid_remove_base_bdev", 00:06:32.807 "bdev_raid_add_base_bdev", 00:06:32.807 "bdev_raid_delete", 00:06:32.807 "bdev_raid_create", 
00:06:32.807 "bdev_raid_get_bdevs", 00:06:32.807 "bdev_error_inject_error", 00:06:32.807 "bdev_error_delete", 00:06:32.807 "bdev_error_create", 00:06:32.807 "bdev_split_delete", 00:06:32.807 "bdev_split_create", 00:06:32.807 "bdev_delay_delete", 00:06:32.807 "bdev_delay_create", 00:06:32.807 "bdev_delay_update_latency", 00:06:32.807 "bdev_zone_block_delete", 00:06:32.807 "bdev_zone_block_create", 00:06:32.807 "blobfs_create", 00:06:32.807 "blobfs_detect", 00:06:32.807 "blobfs_set_cache_size", 00:06:32.807 "bdev_aio_delete", 00:06:32.807 "bdev_aio_rescan", 00:06:32.807 "bdev_aio_create", 00:06:32.807 "bdev_ftl_set_property", 00:06:32.807 "bdev_ftl_get_properties", 00:06:32.807 "bdev_ftl_get_stats", 00:06:32.807 "bdev_ftl_unmap", 00:06:32.807 "bdev_ftl_unload", 00:06:32.807 "bdev_ftl_delete", 00:06:32.807 "bdev_ftl_load", 00:06:32.807 "bdev_ftl_create", 00:06:32.807 "bdev_virtio_attach_controller", 00:06:32.807 "bdev_virtio_scsi_get_devices", 00:06:32.807 "bdev_virtio_detach_controller", 00:06:32.807 "bdev_virtio_blk_set_hotplug", 00:06:32.807 "bdev_iscsi_delete", 00:06:32.807 "bdev_iscsi_create", 00:06:32.807 "bdev_iscsi_set_options", 00:06:32.807 "accel_error_inject_error", 00:06:32.807 "ioat_scan_accel_module", 00:06:32.807 "dsa_scan_accel_module", 00:06:32.807 "iaa_scan_accel_module", 00:06:32.807 "vfu_virtio_create_scsi_endpoint", 00:06:32.807 "vfu_virtio_scsi_remove_target", 00:06:32.807 "vfu_virtio_scsi_add_target", 00:06:32.807 "vfu_virtio_create_blk_endpoint", 00:06:32.807 "vfu_virtio_delete_endpoint", 00:06:32.807 "keyring_file_remove_key", 00:06:32.807 "keyring_file_add_key", 00:06:32.807 "iscsi_set_options", 00:06:32.807 "iscsi_get_auth_groups", 00:06:32.807 "iscsi_auth_group_remove_secret", 00:06:32.807 "iscsi_auth_group_add_secret", 00:06:32.807 "iscsi_delete_auth_group", 00:06:32.807 "iscsi_create_auth_group", 00:06:32.807 "iscsi_set_discovery_auth", 00:06:32.807 "iscsi_get_options", 00:06:32.807 "iscsi_target_node_request_logout", 00:06:32.807 "iscsi_target_node_set_redirect", 00:06:32.807 "iscsi_target_node_set_auth", 00:06:32.807 "iscsi_target_node_add_lun", 00:06:32.807 "iscsi_get_stats", 00:06:32.807 "iscsi_get_connections", 00:06:32.807 "iscsi_portal_group_set_auth", 00:06:32.807 "iscsi_start_portal_group", 00:06:32.807 "iscsi_delete_portal_group", 00:06:32.807 "iscsi_create_portal_group", 00:06:32.807 "iscsi_get_portal_groups", 00:06:32.807 "iscsi_delete_target_node", 00:06:32.807 "iscsi_target_node_remove_pg_ig_maps", 00:06:32.807 "iscsi_target_node_add_pg_ig_maps", 00:06:32.807 "iscsi_create_target_node", 00:06:32.807 "iscsi_get_target_nodes", 00:06:32.807 "iscsi_delete_initiator_group", 00:06:32.807 "iscsi_initiator_group_remove_initiators", 00:06:32.807 "iscsi_initiator_group_add_initiators", 00:06:32.807 "iscsi_create_initiator_group", 00:06:32.807 "iscsi_get_initiator_groups", 00:06:32.807 "nvmf_set_crdt", 00:06:32.807 "nvmf_set_config", 00:06:32.807 "nvmf_set_max_subsystems", 00:06:32.807 "nvmf_subsystem_get_listeners", 00:06:32.807 "nvmf_subsystem_get_qpairs", 00:06:32.807 "nvmf_subsystem_get_controllers", 00:06:32.807 "nvmf_get_stats", 00:06:32.807 "nvmf_get_transports", 00:06:32.807 "nvmf_create_transport", 00:06:32.807 "nvmf_get_targets", 00:06:32.807 "nvmf_delete_target", 00:06:32.807 "nvmf_create_target", 00:06:32.807 "nvmf_subsystem_allow_any_host", 00:06:32.807 "nvmf_subsystem_remove_host", 00:06:32.807 "nvmf_subsystem_add_host", 00:06:32.807 "nvmf_ns_remove_host", 00:06:32.807 "nvmf_ns_add_host", 00:06:32.807 "nvmf_subsystem_remove_ns", 00:06:32.807 
"nvmf_subsystem_add_ns", 00:06:32.807 "nvmf_subsystem_listener_set_ana_state", 00:06:32.807 "nvmf_discovery_get_referrals", 00:06:32.807 "nvmf_discovery_remove_referral", 00:06:32.807 "nvmf_discovery_add_referral", 00:06:32.807 "nvmf_subsystem_remove_listener", 00:06:32.807 "nvmf_subsystem_add_listener", 00:06:32.807 "nvmf_delete_subsystem", 00:06:32.807 "nvmf_create_subsystem", 00:06:32.807 "nvmf_get_subsystems", 00:06:32.807 "env_dpdk_get_mem_stats", 00:06:32.807 "nbd_get_disks", 00:06:32.807 "nbd_stop_disk", 00:06:32.807 "nbd_start_disk", 00:06:32.807 "ublk_recover_disk", 00:06:32.807 "ublk_get_disks", 00:06:32.807 "ublk_stop_disk", 00:06:32.807 "ublk_start_disk", 00:06:32.807 "ublk_destroy_target", 00:06:32.807 "ublk_create_target", 00:06:32.807 "virtio_blk_create_transport", 00:06:32.807 "virtio_blk_get_transports", 00:06:32.807 "vhost_controller_set_coalescing", 00:06:32.807 "vhost_get_controllers", 00:06:32.807 "vhost_delete_controller", 00:06:32.807 "vhost_create_blk_controller", 00:06:32.807 "vhost_scsi_controller_remove_target", 00:06:32.807 "vhost_scsi_controller_add_target", 00:06:32.807 "vhost_start_scsi_controller", 00:06:32.807 "vhost_create_scsi_controller", 00:06:32.807 "thread_set_cpumask", 00:06:32.807 "framework_get_scheduler", 00:06:32.807 "framework_set_scheduler", 00:06:32.807 "framework_get_reactors", 00:06:32.807 "thread_get_io_channels", 00:06:32.807 "thread_get_pollers", 00:06:32.807 "thread_get_stats", 00:06:32.807 "framework_monitor_context_switch", 00:06:32.807 "spdk_kill_instance", 00:06:32.807 "log_enable_timestamps", 00:06:32.807 "log_get_flags", 00:06:32.807 "log_clear_flag", 00:06:32.807 "log_set_flag", 00:06:32.807 "log_get_level", 00:06:32.807 "log_set_level", 00:06:32.807 "log_get_print_level", 00:06:32.807 "log_set_print_level", 00:06:32.807 "framework_enable_cpumask_locks", 00:06:32.807 "framework_disable_cpumask_locks", 00:06:32.807 "framework_wait_init", 00:06:32.807 "framework_start_init", 00:06:32.807 "scsi_get_devices", 00:06:32.807 "bdev_get_histogram", 00:06:32.807 "bdev_enable_histogram", 00:06:32.807 "bdev_set_qos_limit", 00:06:32.807 "bdev_set_qd_sampling_period", 00:06:32.807 "bdev_get_bdevs", 00:06:32.807 "bdev_reset_iostat", 00:06:32.807 "bdev_get_iostat", 00:06:32.807 "bdev_examine", 00:06:32.807 "bdev_wait_for_examine", 00:06:32.807 "bdev_set_options", 00:06:32.807 "notify_get_notifications", 00:06:32.807 "notify_get_types", 00:06:32.807 "accel_get_stats", 00:06:32.807 "accel_set_options", 00:06:32.807 "accel_set_driver", 00:06:32.807 "accel_crypto_key_destroy", 00:06:32.807 "accel_crypto_keys_get", 00:06:32.807 "accel_crypto_key_create", 00:06:32.808 "accel_assign_opc", 00:06:32.808 "accel_get_module_info", 00:06:32.808 "accel_get_opc_assignments", 00:06:32.808 "vmd_rescan", 00:06:32.808 "vmd_remove_device", 00:06:32.808 "vmd_enable", 00:06:32.808 "sock_set_default_impl", 00:06:32.808 "sock_impl_set_options", 00:06:32.808 "sock_impl_get_options", 00:06:32.808 "iobuf_get_stats", 00:06:32.808 "iobuf_set_options", 00:06:32.808 "keyring_get_keys", 00:06:32.808 "framework_get_pci_devices", 00:06:32.808 "framework_get_config", 00:06:32.808 "framework_get_subsystems", 00:06:32.808 "vfu_tgt_set_base_path", 00:06:32.808 "trace_get_info", 00:06:32.808 "trace_get_tpoint_group_mask", 00:06:32.808 "trace_disable_tpoint_group", 00:06:32.808 "trace_enable_tpoint_group", 00:06:32.808 "trace_clear_tpoint_mask", 00:06:32.808 "trace_set_tpoint_mask", 00:06:32.808 "spdk_get_version", 00:06:32.808 "rpc_get_methods" 00:06:32.808 ] 00:06:32.808 06:32:37 -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:32.808 06:32:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:32.808 06:32:37 -- common/autotest_common.sh@10 -- # set +x 00:06:33.066 06:32:37 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:33.066 06:32:37 -- spdkcli/tcp.sh@38 -- # killprocess 4063681 00:06:33.066 06:32:37 -- common/autotest_common.sh@936 -- # '[' -z 4063681 ']' 00:06:33.066 06:32:37 -- common/autotest_common.sh@940 -- # kill -0 4063681 00:06:33.066 06:32:37 -- common/autotest_common.sh@941 -- # uname 00:06:33.066 06:32:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:33.066 06:32:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4063681 00:06:33.066 06:32:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:33.066 06:32:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:33.066 06:32:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4063681' 00:06:33.066 killing process with pid 4063681 00:06:33.066 06:32:37 -- common/autotest_common.sh@955 -- # kill 4063681 00:06:33.066 06:32:37 -- common/autotest_common.sh@960 -- # wait 4063681 00:06:33.326 00:06:33.326 real 0m1.199s 00:06:33.326 user 0m2.143s 00:06:33.326 sys 0m0.424s 00:06:33.326 06:32:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:33.326 06:32:37 -- common/autotest_common.sh@10 -- # set +x 00:06:33.326 ************************************ 00:06:33.326 END TEST spdkcli_tcp 00:06:33.326 ************************************ 00:06:33.326 06:32:37 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:33.326 06:32:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:33.326 06:32:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:33.326 06:32:37 -- common/autotest_common.sh@10 -- # set +x 00:06:33.584 ************************************ 00:06:33.584 START TEST dpdk_mem_utility 00:06:33.584 ************************************ 00:06:33.584 06:32:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:33.584 * Looking for test storage... 00:06:33.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:33.584 06:32:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:33.584 06:32:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=4063896 00:06:33.584 06:32:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:33.584 06:32:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 4063896 00:06:33.584 06:32:38 -- common/autotest_common.sh@817 -- # '[' -z 4063896 ']' 00:06:33.585 06:32:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.585 06:32:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:33.585 06:32:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:33.585 06:32:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:33.585 06:32:38 -- common/autotest_common.sh@10 -- # set +x 00:06:33.585 [2024-04-17 06:32:38.093598] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:06:33.585 [2024-04-17 06:32:38.093697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4063896 ] 00:06:33.585 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.585 [2024-04-17 06:32:38.152485] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.843 [2024-04-17 06:32:38.246040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.102 06:32:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:34.102 06:32:38 -- common/autotest_common.sh@850 -- # return 0 00:06:34.102 06:32:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:34.102 06:32:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:34.102 06:32:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:34.102 06:32:38 -- common/autotest_common.sh@10 -- # set +x 00:06:34.102 { 00:06:34.102 "filename": "/tmp/spdk_mem_dump.txt" 00:06:34.102 } 00:06:34.102 06:32:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:34.102 06:32:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:34.102 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:34.102 1 heaps totaling size 814.000000 MiB 00:06:34.102 size: 814.000000 MiB heap id: 0 00:06:34.102 end heaps---------- 00:06:34.102 8 mempools totaling size 598.116089 MiB 00:06:34.102 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:34.102 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:34.102 size: 84.521057 MiB name: bdev_io_4063896 00:06:34.102 size: 51.011292 MiB name: evtpool_4063896 00:06:34.102 size: 50.003479 MiB name: msgpool_4063896 00:06:34.102 size: 21.763794 MiB name: PDU_Pool 00:06:34.102 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:34.102 size: 0.026123 MiB name: Session_Pool 00:06:34.102 end mempools------- 00:06:34.102 6 memzones totaling size 4.142822 MiB 00:06:34.102 size: 1.000366 MiB name: RG_ring_0_4063896 00:06:34.102 size: 1.000366 MiB name: RG_ring_1_4063896 00:06:34.102 size: 1.000366 MiB name: RG_ring_4_4063896 00:06:34.102 size: 1.000366 MiB name: RG_ring_5_4063896 00:06:34.102 size: 0.125366 MiB name: RG_ring_2_4063896 00:06:34.102 size: 0.015991 MiB name: RG_ring_3_4063896 00:06:34.102 end memzones------- 00:06:34.102 06:32:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:34.102 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:34.102 list of free elements. 
size: 12.519348 MiB 00:06:34.102 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:34.102 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:34.102 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:34.102 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:34.102 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:34.102 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:34.102 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:34.102 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:34.102 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:34.102 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:34.102 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:34.102 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:34.102 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:34.102 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:34.102 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:34.102 list of standard malloc elements. size: 199.218079 MiB 00:06:34.102 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:34.102 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:34.102 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:34.102 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:34.102 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:34.102 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:34.102 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:34.102 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:34.102 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:34.102 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:34.102 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:34.102 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:34.102 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:34.102 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:34.102 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:34.102 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:34.103 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:34.103 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:34.103 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:34.103 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:34.103 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:34.103 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:34.103 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:34.103 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:34.103 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:34.103 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:34.103 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:34.103 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:34.103 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:34.103 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:34.103 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:34.103 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:34.103 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:06:34.103 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:34.103 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:34.103 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:34.103 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:34.103 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:34.103 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:34.103 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:34.103 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:34.103 list of memzone associated elements. size: 602.262573 MiB 00:06:34.103 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:34.103 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:34.103 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:34.103 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:34.103 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:34.103 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_4063896_0 00:06:34.103 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:34.103 associated memzone info: size: 48.002930 MiB name: MP_evtpool_4063896_0 00:06:34.103 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:34.103 associated memzone info: size: 48.002930 MiB name: MP_msgpool_4063896_0 00:06:34.103 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:34.103 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:34.103 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:34.103 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:34.103 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:34.103 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_4063896 00:06:34.103 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:34.103 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_4063896 00:06:34.103 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:34.103 associated memzone info: size: 1.007996 MiB name: MP_evtpool_4063896 00:06:34.103 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:34.103 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:34.103 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:34.103 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:34.103 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:34.103 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:34.103 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:34.103 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:34.103 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:34.103 associated memzone info: size: 1.000366 MiB name: RG_ring_0_4063896 00:06:34.103 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:34.103 associated memzone info: size: 1.000366 MiB name: RG_ring_1_4063896 00:06:34.103 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:34.103 associated memzone info: size: 1.000366 MiB name: RG_ring_4_4063896 00:06:34.103 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:34.103 associated memzone info: size: 1.000366 MiB name: RG_ring_5_4063896 00:06:34.103 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:34.103 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_4063896 00:06:34.103 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:34.103 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:34.103 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:34.103 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:34.103 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:34.103 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:34.103 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:34.103 associated memzone info: size: 0.125366 MiB name: RG_ring_2_4063896 00:06:34.103 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:34.103 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:34.103 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:34.103 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:34.103 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:34.103 associated memzone info: size: 0.015991 MiB name: RG_ring_3_4063896 00:06:34.103 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:34.103 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:34.103 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:34.103 associated memzone info: size: 0.000183 MiB name: MP_msgpool_4063896 00:06:34.103 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:34.103 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_4063896 00:06:34.103 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:34.103 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:34.103 06:32:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:34.103 06:32:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 4063896 00:06:34.103 06:32:38 -- common/autotest_common.sh@936 -- # '[' -z 4063896 ']' 00:06:34.103 06:32:38 -- common/autotest_common.sh@940 -- # kill -0 4063896 00:06:34.103 06:32:38 -- common/autotest_common.sh@941 -- # uname 00:06:34.103 06:32:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:34.103 06:32:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4063896 00:06:34.103 06:32:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:34.103 06:32:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:34.103 06:32:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4063896' 00:06:34.103 killing process with pid 4063896 00:06:34.103 06:32:38 -- common/autotest_common.sh@955 -- # kill 4063896 00:06:34.103 06:32:38 -- common/autotest_common.sh@960 -- # wait 4063896 00:06:34.669 00:06:34.669 real 0m1.070s 00:06:34.669 user 0m1.041s 00:06:34.669 sys 0m0.408s 00:06:34.669 06:32:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:34.669 06:32:39 -- common/autotest_common.sh@10 -- # set +x 00:06:34.669 ************************************ 00:06:34.669 END TEST dpdk_mem_utility 00:06:34.669 ************************************ 00:06:34.669 06:32:39 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:34.669 06:32:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:34.669 06:32:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.669 06:32:39 -- common/autotest_common.sh@10 -- # set +x 
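For reference, the memory report above can be reproduced by hand against a standalone SPDK target; the sketch below mirrors the binary, socket and script invocations captured in the trace (the readiness wait is simplified — the test itself uses the waitforlisten helper):

  # Start the SPDK target on core 0, as the test does.
  ./build/bin/spdk_tgt -m 0x1 &
  tgt_pid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done   # crude stand-in for waitforlisten

  # Ask the target to dump its DPDK memory state; the reply names the
  # dump file ({"filename": "/tmp/spdk_mem_dump.txt"} above).
  ./scripts/rpc.py env_dpdk_get_mem_stats

  # Summarize the dump: heap/mempool/memzone totals, then the per-element
  # breakdown of heap 0 -- the same two invocations seen in the trace.
  ./scripts/dpdk_mem_info.py
  ./scripts/dpdk_mem_info.py -m 0

  kill $tgt_pid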
00:06:34.669 ************************************ 00:06:34.669 START TEST event 00:06:34.669 ************************************ 00:06:34.669 06:32:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:34.669 * Looking for test storage... 00:06:34.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:34.669 06:32:39 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:34.669 06:32:39 -- bdev/nbd_common.sh@6 -- # set -e 00:06:34.669 06:32:39 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:34.669 06:32:39 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:34.669 06:32:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.669 06:32:39 -- common/autotest_common.sh@10 -- # set +x 00:06:34.926 ************************************ 00:06:34.926 START TEST event_perf 00:06:34.926 ************************************ 00:06:34.926 06:32:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:34.926 Running I/O for 1 seconds...[2024-04-17 06:32:39.345988] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:06:34.926 [2024-04-17 06:32:39.346069] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4064151 ] 00:06:34.926 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.926 [2024-04-17 06:32:39.407720] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:34.926 [2024-04-17 06:32:39.498411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.926 [2024-04-17 06:32:39.498467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.926 [2024-04-17 06:32:39.498584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.926 [2024-04-17 06:32:39.498586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.926 [2024-04-17 06:32:39.498742] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:06:36.296 Running I/O for 1 seconds... 00:06:36.296 lcore 0: 236077 00:06:36.296 lcore 1: 236076 00:06:36.296 lcore 2: 236076 00:06:36.296 lcore 3: 236076 00:06:36.296 done. 
00:06:36.296 00:06:36.296 real 0m1.249s 00:06:36.296 user 0m4.169s 00:06:36.296 sys 0m0.075s 00:06:36.296 06:32:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:36.296 06:32:40 -- common/autotest_common.sh@10 -- # set +x 00:06:36.296 ************************************ 00:06:36.296 END TEST event_perf 00:06:36.296 ************************************ 00:06:36.296 06:32:40 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:36.296 06:32:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:36.296 06:32:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.296 06:32:40 -- common/autotest_common.sh@10 -- # set +x 00:06:36.296 ************************************ 00:06:36.296 START TEST event_reactor 00:06:36.296 ************************************ 00:06:36.296 06:32:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:36.296 [2024-04-17 06:32:40.718891] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:06:36.296 [2024-04-17 06:32:40.718956] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4064385 ] 00:06:36.296 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.296 [2024-04-17 06:32:40.782490] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.296 [2024-04-17 06:32:40.870120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.296 [2024-04-17 06:32:40.870238] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:06:37.667 test_start 00:06:37.667 oneshot 00:06:37.667 tick 100 00:06:37.667 tick 100 00:06:37.667 tick 250 00:06:37.667 tick 100 00:06:37.667 tick 100 00:06:37.667 tick 100 00:06:37.667 tick 250 00:06:37.667 tick 500 00:06:37.667 tick 100 00:06:37.667 tick 100 00:06:37.667 tick 250 00:06:37.667 tick 100 00:06:37.667 tick 100 00:06:37.667 test_end 00:06:37.667 00:06:37.667 real 0m1.248s 00:06:37.667 user 0m1.160s 00:06:37.667 sys 0m0.083s 00:06:37.667 06:32:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:37.667 06:32:41 -- common/autotest_common.sh@10 -- # set +x 00:06:37.667 ************************************ 00:06:37.667 END TEST event_reactor 00:06:37.667 ************************************ 00:06:37.667 06:32:41 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:37.667 06:32:41 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:37.667 06:32:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.667 06:32:41 -- common/autotest_common.sh@10 -- # set +x 00:06:37.667 ************************************ 00:06:37.667 START TEST event_reactor_perf 00:06:37.667 ************************************ 00:06:37.667 06:32:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:37.667 [2024-04-17 06:32:42.090331] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:06:37.667 [2024-04-17 06:32:42.090395] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4064551 ] 00:06:37.667 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.667 [2024-04-17 06:32:42.155536] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.667 [2024-04-17 06:32:42.243747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.667 [2024-04-17 06:32:42.243855] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:06:39.041 test_start 00:06:39.041 test_end 00:06:39.041 Performance: 351036 events per second 00:06:39.041 00:06:39.041 real 0m1.250s 00:06:39.041 user 0m1.165s 00:06:39.041 sys 0m0.079s 00:06:39.041 06:32:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:39.041 06:32:43 -- common/autotest_common.sh@10 -- # set +x 00:06:39.041 ************************************ 00:06:39.041 END TEST event_reactor_perf 00:06:39.041 ************************************ 00:06:39.041 06:32:43 -- event/event.sh@49 -- # uname -s 00:06:39.041 06:32:43 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:39.041 06:32:43 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:39.041 06:32:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:39.041 06:32:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.041 06:32:43 -- common/autotest_common.sh@10 -- # set +x 00:06:39.041 ************************************ 00:06:39.041 START TEST event_scheduler 00:06:39.041 ************************************ 00:06:39.041 06:32:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:39.041 * Looking for test storage... 00:06:39.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:39.041 06:32:43 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:39.041 06:32:43 -- scheduler/scheduler.sh@35 -- # scheduler_pid=4064737 00:06:39.041 06:32:43 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:39.042 06:32:43 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:39.042 06:32:43 -- scheduler/scheduler.sh@37 -- # waitforlisten 4064737 00:06:39.042 06:32:43 -- common/autotest_common.sh@817 -- # '[' -z 4064737 ']' 00:06:39.042 06:32:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.042 06:32:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:39.042 06:32:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.042 06:32:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:39.042 06:32:43 -- common/autotest_common.sh@10 -- # set +x 00:06:39.042 [2024-04-17 06:32:43.544368] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:06:39.042 [2024-04-17 06:32:43.544442] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4064737 ] 00:06:39.042 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.042 [2024-04-17 06:32:43.602113] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:39.300 [2024-04-17 06:32:43.687850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.300 [2024-04-17 06:32:43.687908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.300 [2024-04-17 06:32:43.687974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.300 [2024-04-17 06:32:43.687977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.300 06:32:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:39.300 06:32:43 -- common/autotest_common.sh@850 -- # return 0 00:06:39.300 06:32:43 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:39.300 06:32:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.300 06:32:43 -- common/autotest_common.sh@10 -- # set +x 00:06:39.300 POWER: Env isn't set yet! 00:06:39.300 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:39.300 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:06:39.300 POWER: Cannot get available frequencies of lcore 0 00:06:39.300 POWER: Attempting to initialise PSTAT power management... 00:06:39.300 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:39.300 POWER: Initialized successfully for lcore 0 power management 00:06:39.300 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:39.300 POWER: Initialized successfully for lcore 1 power management 00:06:39.300 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:39.300 POWER: Initialized successfully for lcore 2 power management 00:06:39.300 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:39.300 POWER: Initialized successfully for lcore 3 power management 00:06:39.300 06:32:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.300 06:32:43 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:39.300 06:32:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.300 06:32:43 -- common/autotest_common.sh@10 -- # set +x 00:06:39.300 [2024-04-17 06:32:43.898649] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
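The startup sequence logged above can be driven manually in roughly the same way; the flags and RPC names below are the ones from the trace, the rest is a sketch:

  # Launch the scheduler test app paused before subsystem init, on cores
  # 0-3 with core 2 as the main lcore (-m 0xF -p 0x2, as in the trace).
  ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &

  # While the app waits, select the dynamic scheduler and then let
  # initialization proceed; this is the point where the ACPI/PSTAT probing
  # and the 'performance' governor switch above happen.
  ./scripts/rpc.py framework_set_scheduler dynamic
  ./scripts/rpc.py framework_start_init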
00:06:39.300 06:32:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.300 06:32:43 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:39.300 06:32:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:39.300 06:32:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.301 06:32:43 -- common/autotest_common.sh@10 -- # set +x 00:06:39.559 ************************************ 00:06:39.559 START TEST scheduler_create_thread 00:06:39.559 ************************************ 00:06:39.559 06:32:44 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:06:39.559 06:32:44 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:39.559 06:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.559 06:32:44 -- common/autotest_common.sh@10 -- # set +x 00:06:39.559 2 00:06:39.559 06:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.559 06:32:44 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:39.559 06:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.559 06:32:44 -- common/autotest_common.sh@10 -- # set +x 00:06:39.559 3 00:06:39.559 06:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.559 06:32:44 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:39.559 06:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.559 06:32:44 -- common/autotest_common.sh@10 -- # set +x 00:06:39.559 4 00:06:39.559 06:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.559 06:32:44 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:39.559 06:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.559 06:32:44 -- common/autotest_common.sh@10 -- # set +x 00:06:39.559 5 00:06:39.559 06:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.559 06:32:44 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:39.559 06:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.559 06:32:44 -- common/autotest_common.sh@10 -- # set +x 00:06:39.559 6 00:06:39.559 06:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.559 06:32:44 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:39.559 06:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.559 06:32:44 -- common/autotest_common.sh@10 -- # set +x 00:06:39.559 7 00:06:39.559 06:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.559 06:32:44 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:39.559 06:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.559 06:32:44 -- common/autotest_common.sh@10 -- # set +x 00:06:39.559 8 00:06:39.559 06:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.559 06:32:44 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:39.559 06:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.559 06:32:44 -- common/autotest_common.sh@10 -- # set +x 00:06:39.559 9 00:06:39.559 
06:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.559 06:32:44 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:39.559 06:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.559 06:32:44 -- common/autotest_common.sh@10 -- # set +x 00:06:39.559 10 00:06:39.559 06:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.559 06:32:44 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:39.559 06:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.559 06:32:44 -- common/autotest_common.sh@10 -- # set +x 00:06:39.559 06:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.559 06:32:44 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:39.559 06:32:44 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:39.559 06:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.559 06:32:44 -- common/autotest_common.sh@10 -- # set +x 00:06:40.492 06:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:40.492 06:32:44 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:40.492 06:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:40.492 06:32:44 -- common/autotest_common.sh@10 -- # set +x 00:06:41.864 06:32:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:41.864 06:32:46 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:41.864 06:32:46 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:41.864 06:32:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:41.864 06:32:46 -- common/autotest_common.sh@10 -- # set +x 00:06:42.796 06:32:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:42.796 00:06:42.796 real 0m3.382s 00:06:42.796 user 0m0.012s 00:06:42.796 sys 0m0.003s 00:06:42.796 06:32:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:42.796 06:32:47 -- common/autotest_common.sh@10 -- # set +x 00:06:42.796 ************************************ 00:06:42.796 END TEST scheduler_create_thread 00:06:42.796 ************************************ 00:06:43.054 06:32:47 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:43.054 06:32:47 -- scheduler/scheduler.sh@46 -- # killprocess 4064737 00:06:43.054 06:32:47 -- common/autotest_common.sh@936 -- # '[' -z 4064737 ']' 00:06:43.054 06:32:47 -- common/autotest_common.sh@940 -- # kill -0 4064737 00:06:43.054 06:32:47 -- common/autotest_common.sh@941 -- # uname 00:06:43.054 06:32:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:43.054 06:32:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4064737 00:06:43.054 06:32:47 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:43.054 06:32:47 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:43.054 06:32:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4064737' 00:06:43.054 killing process with pid 4064737 00:06:43.054 06:32:47 -- common/autotest_common.sh@955 -- # kill 4064737 00:06:43.054 06:32:47 -- common/autotest_common.sh@960 -- # wait 4064737 00:06:43.313 [2024-04-17 06:32:47.764143] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
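The create/activate/delete cycle in that subtest is ordinary RPC traffic through the test's scheduler plugin; condensed, the same calls look like this (thread names, masks and activity values are taken from the trace, and rpc.py must be able to find scheduler_plugin on PYTHONPATH, which the test script arranges):

  # A thread pinned to core 0 reporting 100% busy time.
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100

  # An unpinned thread created idle, then raised to 50% activity using the
  # thread id the create call returns (id 11 in the trace).
  id=$(./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$id" 50

  # A short-lived thread, deleted again before the app shuts down.
  id=$(./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete "$id"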
00:06:43.573 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:06:43.573 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:43.573 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:06:43.573 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:43.573 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:06:43.573 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:43.573 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:06:43.573 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:43.573 00:06:43.573 real 0m4.585s 00:06:43.573 user 0m8.249s 00:06:43.573 sys 0m0.376s 00:06:43.573 06:32:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:43.573 06:32:48 -- common/autotest_common.sh@10 -- # set +x 00:06:43.573 ************************************ 00:06:43.573 END TEST event_scheduler 00:06:43.573 ************************************ 00:06:43.573 06:32:48 -- event/event.sh@51 -- # modprobe -n nbd 00:06:43.573 06:32:48 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:43.573 06:32:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:43.573 06:32:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.573 06:32:48 -- common/autotest_common.sh@10 -- # set +x 00:06:43.573 ************************************ 00:06:43.573 START TEST app_repeat 00:06:43.573 ************************************ 00:06:43.573 06:32:48 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:06:43.573 06:32:48 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.573 06:32:48 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.573 06:32:48 -- event/event.sh@13 -- # local nbd_list 00:06:43.573 06:32:48 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:43.573 06:32:48 -- event/event.sh@14 -- # local bdev_list 00:06:43.573 06:32:48 -- event/event.sh@15 -- # local repeat_times=4 00:06:43.573 06:32:48 -- event/event.sh@17 -- # modprobe nbd 00:06:43.573 06:32:48 -- event/event.sh@19 -- # repeat_pid=4065335 00:06:43.573 06:32:48 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:43.573 06:32:48 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:43.573 06:32:48 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 4065335' 00:06:43.573 Process app_repeat pid: 4065335 00:06:43.573 06:32:48 -- event/event.sh@23 -- # for i in {0..2} 00:06:43.573 06:32:48 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:43.573 spdk_app_start Round 0 00:06:43.573 06:32:48 -- event/event.sh@25 -- # waitforlisten 4065335 /var/tmp/spdk-nbd.sock 00:06:43.573 06:32:48 -- common/autotest_common.sh@817 -- # '[' -z 4065335 ']' 00:06:43.573 06:32:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:43.573 06:32:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:43.573 06:32:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:43.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:43.573 06:32:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:43.573 06:32:48 -- common/autotest_common.sh@10 -- # set +x 00:06:43.832 [2024-04-17 06:32:48.191803] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:06:43.832 [2024-04-17 06:32:48.191872] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4065335 ] 00:06:43.832 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.832 [2024-04-17 06:32:48.255548] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.832 [2024-04-17 06:32:48.349907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.832 [2024-04-17 06:32:48.349912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.090 06:32:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:44.090 06:32:48 -- common/autotest_common.sh@850 -- # return 0 00:06:44.090 06:32:48 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.347 Malloc0 00:06:44.348 06:32:48 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.605 Malloc1 00:06:44.605 06:32:48 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:44.605 06:32:48 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.605 06:32:48 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.605 06:32:48 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:44.605 06:32:48 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.605 06:32:48 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:44.605 06:32:48 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:44.605 06:32:48 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.605 06:32:48 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.605 06:32:48 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:44.605 06:32:48 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.605 06:32:48 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:44.605 06:32:48 -- bdev/nbd_common.sh@12 -- # local i 00:06:44.605 06:32:48 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:44.605 06:32:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.605 06:32:48 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:44.863 /dev/nbd0 00:06:44.863 06:32:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:44.863 06:32:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:44.863 06:32:49 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:44.863 06:32:49 -- common/autotest_common.sh@855 -- # local i 00:06:44.863 06:32:49 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:44.863 06:32:49 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:44.863 06:32:49 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:44.863 06:32:49 -- 
common/autotest_common.sh@859 -- # break 00:06:44.863 06:32:49 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:44.863 06:32:49 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:44.863 06:32:49 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:44.863 1+0 records in 00:06:44.863 1+0 records out 00:06:44.863 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178831 s, 22.9 MB/s 00:06:44.863 06:32:49 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:44.863 06:32:49 -- common/autotest_common.sh@872 -- # size=4096 00:06:44.863 06:32:49 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:44.863 06:32:49 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:44.863 06:32:49 -- common/autotest_common.sh@875 -- # return 0 00:06:44.863 06:32:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.863 06:32:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.863 06:32:49 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:45.121 /dev/nbd1 00:06:45.121 06:32:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:45.121 06:32:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:45.121 06:32:49 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:45.121 06:32:49 -- common/autotest_common.sh@855 -- # local i 00:06:45.121 06:32:49 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:45.121 06:32:49 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:45.121 06:32:49 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:45.121 06:32:49 -- common/autotest_common.sh@859 -- # break 00:06:45.121 06:32:49 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:45.121 06:32:49 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:45.121 06:32:49 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:45.121 1+0 records in 00:06:45.121 1+0 records out 00:06:45.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183805 s, 22.3 MB/s 00:06:45.121 06:32:49 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:45.121 06:32:49 -- common/autotest_common.sh@872 -- # size=4096 00:06:45.121 06:32:49 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:45.121 06:32:49 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:45.121 06:32:49 -- common/autotest_common.sh@875 -- # return 0 00:06:45.121 06:32:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.121 06:32:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.121 06:32:49 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.121 06:32:49 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.121 06:32:49 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:45.380 { 00:06:45.380 "nbd_device": "/dev/nbd0", 00:06:45.380 "bdev_name": "Malloc0" 00:06:45.380 }, 00:06:45.380 { 00:06:45.380 "nbd_device": "/dev/nbd1", 
00:06:45.380 "bdev_name": "Malloc1" 00:06:45.380 } 00:06:45.380 ]' 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:45.380 { 00:06:45.380 "nbd_device": "/dev/nbd0", 00:06:45.380 "bdev_name": "Malloc0" 00:06:45.380 }, 00:06:45.380 { 00:06:45.380 "nbd_device": "/dev/nbd1", 00:06:45.380 "bdev_name": "Malloc1" 00:06:45.380 } 00:06:45.380 ]' 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:45.380 /dev/nbd1' 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:45.380 /dev/nbd1' 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@65 -- # count=2 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@95 -- # count=2 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:45.380 256+0 records in 00:06:45.380 256+0 records out 00:06:45.380 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00508526 s, 206 MB/s 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:45.380 256+0 records in 00:06:45.380 256+0 records out 00:06:45.380 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023856 s, 44.0 MB/s 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:45.380 256+0 records in 00:06:45.380 256+0 records out 00:06:45.380 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252631 s, 41.5 MB/s 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@51 -- # local i 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.380 06:32:49 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:45.638 06:32:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:45.638 06:32:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:45.638 06:32:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:45.638 06:32:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.638 06:32:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.638 06:32:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:45.638 06:32:50 -- bdev/nbd_common.sh@41 -- # break 00:06:45.638 06:32:50 -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.638 06:32:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.638 06:32:50 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:45.901 06:32:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:45.901 06:32:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:45.901 06:32:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:45.901 06:32:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.901 06:32:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.901 06:32:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:45.901 06:32:50 -- bdev/nbd_common.sh@41 -- # break 00:06:45.901 06:32:50 -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.901 06:32:50 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.901 06:32:50 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.901 06:32:50 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.192 06:32:50 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:46.192 06:32:50 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:46.192 06:32:50 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.192 06:32:50 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:46.192 06:32:50 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:46.192 06:32:50 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.192 06:32:50 -- bdev/nbd_common.sh@65 -- # true 00:06:46.192 06:32:50 -- bdev/nbd_common.sh@65 -- # count=0 00:06:46.192 06:32:50 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:46.192 06:32:50 -- bdev/nbd_common.sh@104 -- # count=0 00:06:46.192 06:32:50 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:46.192 06:32:50 -- bdev/nbd_common.sh@109 -- # return 0 00:06:46.192 06:32:50 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:46.449 06:32:50 -- event/event.sh@35 -- # 
sleep 3 00:06:46.707 [2024-04-17 06:32:51.183210] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:46.707 [2024-04-17 06:32:51.270750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.707 [2024-04-17 06:32:51.270753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.964 [2024-04-17 06:32:51.332950] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:46.964 [2024-04-17 06:32:51.333026] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:49.487 06:32:53 -- event/event.sh@23 -- # for i in {0..2} 00:06:49.487 06:32:53 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:49.487 spdk_app_start Round 1 00:06:49.487 06:32:53 -- event/event.sh@25 -- # waitforlisten 4065335 /var/tmp/spdk-nbd.sock 00:06:49.487 06:32:53 -- common/autotest_common.sh@817 -- # '[' -z 4065335 ']' 00:06:49.487 06:32:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:49.487 06:32:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:49.487 06:32:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:49.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:49.487 06:32:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:49.487 06:32:53 -- common/autotest_common.sh@10 -- # set +x 00:06:49.744 06:32:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:49.744 06:32:54 -- common/autotest_common.sh@850 -- # return 0 00:06:49.744 06:32:54 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:50.002 Malloc0 00:06:50.002 06:32:54 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:50.260 Malloc1 00:06:50.260 06:32:54 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:50.260 06:32:54 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.260 06:32:54 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.260 06:32:54 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:50.260 06:32:54 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.260 06:32:54 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:50.260 06:32:54 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:50.260 06:32:54 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.260 06:32:54 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.260 06:32:54 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:50.260 06:32:54 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.260 06:32:54 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:50.260 06:32:54 -- bdev/nbd_common.sh@12 -- # local i 00:06:50.260 06:32:54 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:50.260 06:32:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.260 06:32:54 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:50.518 /dev/nbd0 00:06:50.518 06:32:54 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:50.518 06:32:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:50.518 06:32:54 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:50.518 06:32:54 -- common/autotest_common.sh@855 -- # local i 00:06:50.518 06:32:54 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:50.518 06:32:54 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:50.518 06:32:54 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:50.518 06:32:54 -- common/autotest_common.sh@859 -- # break 00:06:50.518 06:32:54 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:50.518 06:32:54 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:50.518 06:32:54 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:50.518 1+0 records in 00:06:50.518 1+0 records out 00:06:50.518 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196084 s, 20.9 MB/s 00:06:50.518 06:32:54 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:50.518 06:32:54 -- common/autotest_common.sh@872 -- # size=4096 00:06:50.518 06:32:54 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:50.518 06:32:54 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:50.518 06:32:54 -- common/autotest_common.sh@875 -- # return 0 00:06:50.518 06:32:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.518 06:32:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.518 06:32:54 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:50.776 /dev/nbd1 00:06:50.776 06:32:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:50.776 06:32:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:50.776 06:32:55 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:50.776 06:32:55 -- common/autotest_common.sh@855 -- # local i 00:06:50.776 06:32:55 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:50.776 06:32:55 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:50.776 06:32:55 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:50.776 06:32:55 -- common/autotest_common.sh@859 -- # break 00:06:50.776 06:32:55 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:50.776 06:32:55 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:50.776 06:32:55 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:50.776 1+0 records in 00:06:50.776 1+0 records out 00:06:50.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213161 s, 19.2 MB/s 00:06:50.776 06:32:55 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:50.776 06:32:55 -- common/autotest_common.sh@872 -- # size=4096 00:06:50.776 06:32:55 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:50.776 06:32:55 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:50.776 06:32:55 -- common/autotest_common.sh@875 -- # return 0 00:06:50.776 06:32:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.776 06:32:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.776 06:32:55 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.776 06:32:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.776 06:32:55 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:51.034 { 00:06:51.034 "nbd_device": "/dev/nbd0", 00:06:51.034 "bdev_name": "Malloc0" 00:06:51.034 }, 00:06:51.034 { 00:06:51.034 "nbd_device": "/dev/nbd1", 00:06:51.034 "bdev_name": "Malloc1" 00:06:51.034 } 00:06:51.034 ]' 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:51.034 { 00:06:51.034 "nbd_device": "/dev/nbd0", 00:06:51.034 "bdev_name": "Malloc0" 00:06:51.034 }, 00:06:51.034 { 00:06:51.034 "nbd_device": "/dev/nbd1", 00:06:51.034 "bdev_name": "Malloc1" 00:06:51.034 } 00:06:51.034 ]' 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:51.034 /dev/nbd1' 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:51.034 /dev/nbd1' 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@65 -- # count=2 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@95 -- # count=2 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:51.034 256+0 records in 00:06:51.034 256+0 records out 00:06:51.034 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00511408 s, 205 MB/s 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:51.034 256+0 records in 00:06:51.034 256+0 records out 00:06:51.034 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238543 s, 44.0 MB/s 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:51.034 256+0 records in 00:06:51.034 256+0 records out 00:06:51.034 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252911 s, 41.5 MB/s 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@51 -- # local i 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.034 06:32:55 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:51.292 06:32:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:51.292 06:32:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:51.292 06:32:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:51.292 06:32:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.292 06:32:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.292 06:32:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:51.292 06:32:55 -- bdev/nbd_common.sh@41 -- # break 00:06:51.292 06:32:55 -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.292 06:32:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.292 06:32:55 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:51.550 06:32:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:51.550 06:32:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:51.550 06:32:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:51.550 06:32:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.550 06:32:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.550 06:32:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:51.550 06:32:56 -- bdev/nbd_common.sh@41 -- # break 00:06:51.550 06:32:56 -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.550 06:32:56 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:51.550 06:32:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.550 06:32:56 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:51.807 06:32:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:51.807 06:32:56 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:51.807 06:32:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.807 06:32:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:51.807 06:32:56 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:51.807 06:32:56 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:06:51.807 06:32:56 -- bdev/nbd_common.sh@65 -- # true 00:06:51.807 06:32:56 -- bdev/nbd_common.sh@65 -- # count=0 00:06:51.807 06:32:56 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:51.807 06:32:56 -- bdev/nbd_common.sh@104 -- # count=0 00:06:51.807 06:32:56 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:51.807 06:32:56 -- bdev/nbd_common.sh@109 -- # return 0 00:06:51.807 06:32:56 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:52.065 06:32:56 -- event/event.sh@35 -- # sleep 3 00:06:52.323 [2024-04-17 06:32:56.848522] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:52.582 [2024-04-17 06:32:56.937610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.582 [2024-04-17 06:32:56.937614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.582 [2024-04-17 06:32:57.000437] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:52.582 [2024-04-17 06:32:57.000522] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:55.107 06:32:59 -- event/event.sh@23 -- # for i in {0..2} 00:06:55.107 06:32:59 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:55.107 spdk_app_start Round 2 00:06:55.107 06:32:59 -- event/event.sh@25 -- # waitforlisten 4065335 /var/tmp/spdk-nbd.sock 00:06:55.107 06:32:59 -- common/autotest_common.sh@817 -- # '[' -z 4065335 ']' 00:06:55.107 06:32:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:55.107 06:32:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:55.107 06:32:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:55.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
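The nbd_dd_data_verify calls traced in the round above reduce to a short write-then-compare cycle: fill a temporary file from /dev/urandom, dd it onto every connected NBD device, then byte-compare each device against that file. A condensed sketch of the cycle (paths shortened and the device list hard-coded here purely for illustration):

    tmp_file=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # write phase: 1 MiB of random data, pushed to each NBD device with O_DIRECT
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify phase: compare the first 1 MiB of each device against the source file
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"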
00:06:55.107 06:32:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:55.107 06:32:59 -- common/autotest_common.sh@10 -- # set +x 00:06:55.365 06:32:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:55.365 06:32:59 -- common/autotest_common.sh@850 -- # return 0 00:06:55.365 06:32:59 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:55.623 Malloc0 00:06:55.623 06:33:00 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:55.882 Malloc1 00:06:55.882 06:33:00 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:55.882 06:33:00 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.882 06:33:00 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:55.882 06:33:00 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:55.882 06:33:00 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.882 06:33:00 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:55.882 06:33:00 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:55.882 06:33:00 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.882 06:33:00 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:55.882 06:33:00 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:55.882 06:33:00 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.882 06:33:00 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:55.882 06:33:00 -- bdev/nbd_common.sh@12 -- # local i 00:06:55.882 06:33:00 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:55.882 06:33:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:55.882 06:33:00 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:56.140 /dev/nbd0 00:06:56.140 06:33:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:56.140 06:33:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:56.140 06:33:00 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:56.140 06:33:00 -- common/autotest_common.sh@855 -- # local i 00:06:56.140 06:33:00 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:56.140 06:33:00 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:56.140 06:33:00 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:56.140 06:33:00 -- common/autotest_common.sh@859 -- # break 00:06:56.140 06:33:00 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:56.140 06:33:00 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:56.140 06:33:00 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:56.140 1+0 records in 00:06:56.140 1+0 records out 00:06:56.140 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000177214 s, 23.1 MB/s 00:06:56.140 06:33:00 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:56.140 06:33:00 -- common/autotest_common.sh@872 -- # size=4096 00:06:56.140 06:33:00 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:56.140 06:33:00 -- common/autotest_common.sh@874 -- # 
'[' 4096 '!=' 0 ']' 00:06:56.140 06:33:00 -- common/autotest_common.sh@875 -- # return 0 00:06:56.140 06:33:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.140 06:33:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.140 06:33:00 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:56.398 /dev/nbd1 00:06:56.398 06:33:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:56.398 06:33:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:56.398 06:33:00 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:56.398 06:33:00 -- common/autotest_common.sh@855 -- # local i 00:06:56.398 06:33:00 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:56.398 06:33:00 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:56.398 06:33:00 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:56.398 06:33:00 -- common/autotest_common.sh@859 -- # break 00:06:56.398 06:33:00 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:56.398 06:33:00 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:56.398 06:33:00 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:56.398 1+0 records in 00:06:56.398 1+0 records out 00:06:56.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0001881 s, 21.8 MB/s 00:06:56.398 06:33:00 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:56.398 06:33:00 -- common/autotest_common.sh@872 -- # size=4096 00:06:56.398 06:33:00 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:56.398 06:33:00 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:56.398 06:33:00 -- common/autotest_common.sh@875 -- # return 0 00:06:56.398 06:33:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.398 06:33:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.398 06:33:00 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:56.398 06:33:00 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.398 06:33:00 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:56.656 06:33:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:56.656 { 00:06:56.656 "nbd_device": "/dev/nbd0", 00:06:56.656 "bdev_name": "Malloc0" 00:06:56.656 }, 00:06:56.656 { 00:06:56.656 "nbd_device": "/dev/nbd1", 00:06:56.656 "bdev_name": "Malloc1" 00:06:56.656 } 00:06:56.656 ]' 00:06:56.656 06:33:01 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:56.656 { 00:06:56.656 "nbd_device": "/dev/nbd0", 00:06:56.656 "bdev_name": "Malloc0" 00:06:56.656 }, 00:06:56.656 { 00:06:56.656 "nbd_device": "/dev/nbd1", 00:06:56.656 "bdev_name": "Malloc1" 00:06:56.656 } 00:06:56.656 ]' 00:06:56.656 06:33:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:56.656 06:33:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:56.656 /dev/nbd1' 00:06:56.656 06:33:01 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:56.656 /dev/nbd1' 00:06:56.656 06:33:01 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:56.656 06:33:01 -- bdev/nbd_common.sh@65 -- # count=2 00:06:56.656 06:33:01 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:56.656 06:33:01 -- bdev/nbd_common.sh@95 -- # count=2 00:06:56.656 06:33:01 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:56.656 06:33:01 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:56.656 06:33:01 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.656 06:33:01 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:56.656 06:33:01 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:56.656 06:33:01 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:56.656 06:33:01 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:56.656 06:33:01 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:56.656 256+0 records in 00:06:56.656 256+0 records out 00:06:56.656 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00391667 s, 268 MB/s 00:06:56.656 06:33:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.656 06:33:01 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:56.913 256+0 records in 00:06:56.913 256+0 records out 00:06:56.913 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246857 s, 42.5 MB/s 00:06:56.913 06:33:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.913 06:33:01 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:56.913 256+0 records in 00:06:56.913 256+0 records out 00:06:56.913 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255482 s, 41.0 MB/s 00:06:56.913 06:33:01 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:56.913 06:33:01 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.913 06:33:01 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:56.913 06:33:01 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:56.913 06:33:01 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:56.913 06:33:01 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:56.913 06:33:01 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:56.913 06:33:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.913 06:33:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:56.913 06:33:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.913 06:33:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:56.913 06:33:01 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:56.913 06:33:01 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:56.913 06:33:01 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.914 06:33:01 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.914 06:33:01 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:56.914 06:33:01 -- bdev/nbd_common.sh@51 -- # local i 00:06:56.914 06:33:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.914 06:33:01 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:57.172 06:33:01 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:57.172 06:33:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:57.172 06:33:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:57.172 06:33:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.172 06:33:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.172 06:33:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:57.172 06:33:01 -- bdev/nbd_common.sh@41 -- # break 00:06:57.172 06:33:01 -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.172 06:33:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.172 06:33:01 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:57.430 06:33:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:57.430 06:33:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:57.430 06:33:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:57.430 06:33:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.430 06:33:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.430 06:33:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:57.430 06:33:01 -- bdev/nbd_common.sh@41 -- # break 00:06:57.430 06:33:01 -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.430 06:33:01 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:57.430 06:33:01 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.430 06:33:01 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:57.687 06:33:02 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:57.687 06:33:02 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:57.687 06:33:02 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:57.687 06:33:02 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:57.687 06:33:02 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:57.687 06:33:02 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:57.687 06:33:02 -- bdev/nbd_common.sh@65 -- # true 00:06:57.687 06:33:02 -- bdev/nbd_common.sh@65 -- # count=0 00:06:57.687 06:33:02 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:57.687 06:33:02 -- bdev/nbd_common.sh@104 -- # count=0 00:06:57.687 06:33:02 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:57.687 06:33:02 -- bdev/nbd_common.sh@109 -- # return 0 00:06:57.687 06:33:02 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:57.945 06:33:02 -- event/event.sh@35 -- # sleep 3 00:06:58.203 [2024-04-17 06:33:02.622174] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:58.203 [2024-04-17 06:33:02.710595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.203 [2024-04-17 06:33:02.710597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.203 [2024-04-17 06:33:02.773438] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:58.203 [2024-04-17 06:33:02.773529] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
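The waitfornbd helper that appears before every dd in the trace follows a simple poll-then-probe pattern: wait for the device to show up in /proc/partitions, then read a single 4 KiB block back and confirm it produced data. A rough sketch (the retry sleep and scratch-file path are assumptions, and the second retry loop around the read is folded away):

    waitfornbd() {
        local nbd_name=$1 i size
        # poll until the kernel publishes the device in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # probe: read one block with O_DIRECT and confirm it was non-empty
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }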
00:07:01.514 06:33:05 -- event/event.sh@38 -- # waitforlisten 4065335 /var/tmp/spdk-nbd.sock 00:07:01.514 06:33:05 -- common/autotest_common.sh@817 -- # '[' -z 4065335 ']' 00:07:01.514 06:33:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:01.514 06:33:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:01.514 06:33:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:01.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:01.515 06:33:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:01.515 06:33:05 -- common/autotest_common.sh@10 -- # set +x 00:07:01.515 06:33:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:01.515 06:33:05 -- common/autotest_common.sh@850 -- # return 0 00:07:01.515 06:33:05 -- event/event.sh@39 -- # killprocess 4065335 00:07:01.515 06:33:05 -- common/autotest_common.sh@936 -- # '[' -z 4065335 ']' 00:07:01.515 06:33:05 -- common/autotest_common.sh@940 -- # kill -0 4065335 00:07:01.515 06:33:05 -- common/autotest_common.sh@941 -- # uname 00:07:01.515 06:33:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:01.515 06:33:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4065335 00:07:01.515 06:33:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:01.515 06:33:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:01.515 06:33:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4065335' 00:07:01.515 killing process with pid 4065335 00:07:01.515 06:33:05 -- common/autotest_common.sh@955 -- # kill 4065335 00:07:01.515 06:33:05 -- common/autotest_common.sh@960 -- # wait 4065335 00:07:01.515 spdk_app_start is called in Round 0. 00:07:01.515 Shutdown signal received, stop current app iteration 00:07:01.515 Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 reinitialization... 00:07:01.515 spdk_app_start is called in Round 1. 00:07:01.515 Shutdown signal received, stop current app iteration 00:07:01.515 Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 reinitialization... 00:07:01.515 spdk_app_start is called in Round 2. 00:07:01.515 Shutdown signal received, stop current app iteration 00:07:01.515 Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 reinitialization... 00:07:01.515 spdk_app_start is called in Round 3. 
00:07:01.515 Shutdown signal received, stop current app iteration 00:07:01.515 06:33:05 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:01.515 06:33:05 -- event/event.sh@42 -- # return 0 00:07:01.515 00:07:01.515 real 0m17.694s 00:07:01.515 user 0m38.894s 00:07:01.515 sys 0m3.322s 00:07:01.515 06:33:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:01.515 06:33:05 -- common/autotest_common.sh@10 -- # set +x 00:07:01.515 ************************************ 00:07:01.515 END TEST app_repeat 00:07:01.515 ************************************ 00:07:01.515 06:33:05 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:01.515 06:33:05 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:01.515 06:33:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:01.515 06:33:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.515 06:33:05 -- common/autotest_common.sh@10 -- # set +x 00:07:01.515 ************************************ 00:07:01.515 START TEST cpu_locks 00:07:01.515 ************************************ 00:07:01.515 06:33:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:01.515 * Looking for test storage... 00:07:01.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:01.515 06:33:06 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:01.515 06:33:06 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:01.515 06:33:06 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:01.515 06:33:06 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:01.515 06:33:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:01.515 06:33:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.515 06:33:06 -- common/autotest_common.sh@10 -- # set +x 00:07:01.773 ************************************ 00:07:01.773 START TEST default_locks 00:07:01.773 ************************************ 00:07:01.773 06:33:06 -- common/autotest_common.sh@1111 -- # default_locks 00:07:01.773 06:33:06 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=4067811 00:07:01.773 06:33:06 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:01.773 06:33:06 -- event/cpu_locks.sh@47 -- # waitforlisten 4067811 00:07:01.773 06:33:06 -- common/autotest_common.sh@817 -- # '[' -z 4067811 ']' 00:07:01.773 06:33:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.773 06:33:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:01.773 06:33:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.773 06:33:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:01.773 06:33:06 -- common/autotest_common.sh@10 -- # set +x 00:07:01.773 [2024-04-17 06:33:06.191832] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:07:01.773 [2024-04-17 06:33:06.191928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4067811 ] 00:07:01.773 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.773 [2024-04-17 06:33:06.253806] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.773 [2024-04-17 06:33:06.348103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.032 06:33:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:02.032 06:33:06 -- common/autotest_common.sh@850 -- # return 0 00:07:02.032 06:33:06 -- event/cpu_locks.sh@49 -- # locks_exist 4067811 00:07:02.032 06:33:06 -- event/cpu_locks.sh@22 -- # lslocks -p 4067811 00:07:02.032 06:33:06 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:02.597 lslocks: write error 00:07:02.597 06:33:06 -- event/cpu_locks.sh@50 -- # killprocess 4067811 00:07:02.597 06:33:06 -- common/autotest_common.sh@936 -- # '[' -z 4067811 ']' 00:07:02.597 06:33:06 -- common/autotest_common.sh@940 -- # kill -0 4067811 00:07:02.597 06:33:06 -- common/autotest_common.sh@941 -- # uname 00:07:02.597 06:33:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:02.597 06:33:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4067811 00:07:02.597 06:33:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:02.597 06:33:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:02.597 06:33:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4067811' 00:07:02.597 killing process with pid 4067811 00:07:02.597 06:33:06 -- common/autotest_common.sh@955 -- # kill 4067811 00:07:02.597 06:33:06 -- common/autotest_common.sh@960 -- # wait 4067811 00:07:02.855 06:33:07 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 4067811 00:07:02.855 06:33:07 -- common/autotest_common.sh@638 -- # local es=0 00:07:02.855 06:33:07 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 4067811 00:07:02.855 06:33:07 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:07:02.855 06:33:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:02.855 06:33:07 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:07:02.855 06:33:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:02.855 06:33:07 -- common/autotest_common.sh@641 -- # waitforlisten 4067811 00:07:02.855 06:33:07 -- common/autotest_common.sh@817 -- # '[' -z 4067811 ']' 00:07:02.855 06:33:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.855 06:33:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:02.855 06:33:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
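The locks_exist check traced a few entries up is just lslocks filtered for the SPDK per-core lock files (the /var/tmp/spdk_cpu_lock_* entries the later overlapped test enumerates). A minimal standalone version, with the pid copied from the trace purely for illustration:

    pid=4067811
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "pid $pid holds an SPDK CPU core lock"
    fi

The 'lslocks: write error' line above is lslocks complaining that grep -q closed the pipe as soon as it found a match, not a test failure; the grep exit status is what the test consumes.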
00:07:02.855 06:33:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:02.855 06:33:07 -- common/autotest_common.sh@10 -- # set +x 00:07:02.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (4067811) - No such process 00:07:02.855 ERROR: process (pid: 4067811) is no longer running 00:07:02.855 06:33:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:02.855 06:33:07 -- common/autotest_common.sh@850 -- # return 1 00:07:02.855 06:33:07 -- common/autotest_common.sh@641 -- # es=1 00:07:02.855 06:33:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:02.855 06:33:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:02.855 06:33:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:02.855 06:33:07 -- event/cpu_locks.sh@54 -- # no_locks 00:07:02.855 06:33:07 -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:02.855 06:33:07 -- event/cpu_locks.sh@26 -- # local lock_files 00:07:02.855 06:33:07 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:02.855 00:07:02.855 real 0m1.246s 00:07:02.855 user 0m1.212s 00:07:02.855 sys 0m0.542s 00:07:02.855 06:33:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:02.855 06:33:07 -- common/autotest_common.sh@10 -- # set +x 00:07:02.855 ************************************ 00:07:02.855 END TEST default_locks 00:07:02.855 ************************************ 00:07:02.856 06:33:07 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:02.856 06:33:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:02.856 06:33:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.856 06:33:07 -- common/autotest_common.sh@10 -- # set +x 00:07:03.114 ************************************ 00:07:03.114 START TEST default_locks_via_rpc 00:07:03.114 ************************************ 00:07:03.114 06:33:07 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:07:03.114 06:33:07 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=4068295 00:07:03.114 06:33:07 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:03.114 06:33:07 -- event/cpu_locks.sh@63 -- # waitforlisten 4068295 00:07:03.114 06:33:07 -- common/autotest_common.sh@817 -- # '[' -z 4068295 ']' 00:07:03.114 06:33:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.114 06:33:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:03.114 06:33:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.114 06:33:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:03.114 06:33:07 -- common/autotest_common.sh@10 -- # set +x 00:07:03.114 [2024-04-17 06:33:07.552388] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:07:03.114 [2024-04-17 06:33:07.552474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4068295 ] 00:07:03.114 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.114 [2024-04-17 06:33:07.613431] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.114 [2024-04-17 06:33:07.707566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.373 06:33:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:03.373 06:33:07 -- common/autotest_common.sh@850 -- # return 0 00:07:03.373 06:33:07 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:03.373 06:33:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:03.373 06:33:07 -- common/autotest_common.sh@10 -- # set +x 00:07:03.373 06:33:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:03.373 06:33:07 -- event/cpu_locks.sh@67 -- # no_locks 00:07:03.373 06:33:07 -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:03.373 06:33:07 -- event/cpu_locks.sh@26 -- # local lock_files 00:07:03.373 06:33:07 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:03.373 06:33:07 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:03.373 06:33:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:03.373 06:33:07 -- common/autotest_common.sh@10 -- # set +x 00:07:03.630 06:33:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:03.630 06:33:07 -- event/cpu_locks.sh@71 -- # locks_exist 4068295 00:07:03.630 06:33:07 -- event/cpu_locks.sh@22 -- # lslocks -p 4068295 00:07:03.630 06:33:07 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:03.888 06:33:08 -- event/cpu_locks.sh@73 -- # killprocess 4068295 00:07:03.888 06:33:08 -- common/autotest_common.sh@936 -- # '[' -z 4068295 ']' 00:07:03.888 06:33:08 -- common/autotest_common.sh@940 -- # kill -0 4068295 00:07:03.888 06:33:08 -- common/autotest_common.sh@941 -- # uname 00:07:03.888 06:33:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:03.888 06:33:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4068295 00:07:03.888 06:33:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:03.888 06:33:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:03.888 06:33:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4068295' 00:07:03.888 killing process with pid 4068295 00:07:03.888 06:33:08 -- common/autotest_common.sh@955 -- # kill 4068295 00:07:03.888 06:33:08 -- common/autotest_common.sh@960 -- # wait 4068295 00:07:04.146 00:07:04.146 real 0m1.157s 00:07:04.147 user 0m1.114s 00:07:04.147 sys 0m0.527s 00:07:04.147 06:33:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:04.147 06:33:08 -- common/autotest_common.sh@10 -- # set +x 00:07:04.147 ************************************ 00:07:04.147 END TEST default_locks_via_rpc 00:07:04.147 ************************************ 00:07:04.147 06:33:08 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:04.147 06:33:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:04.147 06:33:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.147 06:33:08 -- common/autotest_common.sh@10 -- # set +x 00:07:04.405 ************************************ 00:07:04.405 START TEST non_locking_app_on_locked_coremask 
00:07:04.405 ************************************ 00:07:04.405 06:33:08 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:07:04.405 06:33:08 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=4068756 00:07:04.405 06:33:08 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:04.405 06:33:08 -- event/cpu_locks.sh@81 -- # waitforlisten 4068756 /var/tmp/spdk.sock 00:07:04.405 06:33:08 -- common/autotest_common.sh@817 -- # '[' -z 4068756 ']' 00:07:04.405 06:33:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.405 06:33:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:04.405 06:33:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.405 06:33:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:04.405 06:33:08 -- common/autotest_common.sh@10 -- # set +x 00:07:04.405 [2024-04-17 06:33:08.836213] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:07:04.405 [2024-04-17 06:33:08.836320] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4068756 ] 00:07:04.405 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.405 [2024-04-17 06:33:08.898038] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.405 [2024-04-17 06:33:08.984090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.663 06:33:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:04.663 06:33:09 -- common/autotest_common.sh@850 -- # return 0 00:07:04.663 06:33:09 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=4068782 00:07:04.663 06:33:09 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:04.663 06:33:09 -- event/cpu_locks.sh@85 -- # waitforlisten 4068782 /var/tmp/spdk2.sock 00:07:04.663 06:33:09 -- common/autotest_common.sh@817 -- # '[' -z 4068782 ']' 00:07:04.663 06:33:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:04.663 06:33:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:04.663 06:33:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:04.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:04.663 06:33:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:04.663 06:33:09 -- common/autotest_common.sh@10 -- # set +x 00:07:04.921 [2024-04-17 06:33:09.290432] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:07:04.921 [2024-04-17 06:33:09.290551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4068782 ] 00:07:04.921 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.921 [2024-04-17 06:33:09.391252] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
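The non_locking_app_on_locked_coremask case being set up here comes down to one target holding the core-0 lock while a second instance on the same mask opts out of locking entirely. In outline (binary name shortened, startup waits elided):

    spdk_tgt -m 0x1 &                                                # claims /var/tmp/spdk_cpu_lock_000
    # ... wait for /var/tmp/spdk.sock to accept RPCs ...
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & # skips the core lock, so both coexist

The default_locks_via_rpc test a little earlier reaches the same state at runtime instead, toggling the behavior over the RPC socket with framework_disable_cpumask_locks and framework_enable_cpumask_locks (rpc_cmd in the trace).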
00:07:04.921 [2024-04-17 06:33:09.391293] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.179 [2024-04-17 06:33:09.574921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.745 06:33:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:05.745 06:33:10 -- common/autotest_common.sh@850 -- # return 0 00:07:05.745 06:33:10 -- event/cpu_locks.sh@87 -- # locks_exist 4068756 00:07:05.745 06:33:10 -- event/cpu_locks.sh@22 -- # lslocks -p 4068756 00:07:05.745 06:33:10 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:06.003 lslocks: write error 00:07:06.003 06:33:10 -- event/cpu_locks.sh@89 -- # killprocess 4068756 00:07:06.003 06:33:10 -- common/autotest_common.sh@936 -- # '[' -z 4068756 ']' 00:07:06.003 06:33:10 -- common/autotest_common.sh@940 -- # kill -0 4068756 00:07:06.003 06:33:10 -- common/autotest_common.sh@941 -- # uname 00:07:06.003 06:33:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:06.003 06:33:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4068756 00:07:06.003 06:33:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:06.003 06:33:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:06.003 06:33:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4068756' 00:07:06.003 killing process with pid 4068756 00:07:06.003 06:33:10 -- common/autotest_common.sh@955 -- # kill 4068756 00:07:06.003 06:33:10 -- common/autotest_common.sh@960 -- # wait 4068756 00:07:06.936 06:33:11 -- event/cpu_locks.sh@90 -- # killprocess 4068782 00:07:06.936 06:33:11 -- common/autotest_common.sh@936 -- # '[' -z 4068782 ']' 00:07:06.936 06:33:11 -- common/autotest_common.sh@940 -- # kill -0 4068782 00:07:06.936 06:33:11 -- common/autotest_common.sh@941 -- # uname 00:07:06.936 06:33:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:06.936 06:33:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4068782 00:07:06.936 06:33:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:06.936 06:33:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:06.936 06:33:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4068782' 00:07:06.936 killing process with pid 4068782 00:07:06.936 06:33:11 -- common/autotest_common.sh@955 -- # kill 4068782 00:07:06.936 06:33:11 -- common/autotest_common.sh@960 -- # wait 4068782 00:07:07.503 00:07:07.503 real 0m3.069s 00:07:07.503 user 0m3.186s 00:07:07.503 sys 0m1.042s 00:07:07.503 06:33:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:07.503 06:33:11 -- common/autotest_common.sh@10 -- # set +x 00:07:07.503 ************************************ 00:07:07.503 END TEST non_locking_app_on_locked_coremask 00:07:07.503 ************************************ 00:07:07.503 06:33:11 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:07.503 06:33:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:07.503 06:33:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.503 06:33:11 -- common/autotest_common.sh@10 -- # set +x 00:07:07.503 ************************************ 00:07:07.503 START TEST locking_app_on_unlocked_coremask 00:07:07.503 ************************************ 00:07:07.503 06:33:11 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:07:07.503 06:33:11 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=4069220 00:07:07.503 06:33:11 -- 
event/cpu_locks.sh@99 -- # waitforlisten 4069220 /var/tmp/spdk.sock 00:07:07.503 06:33:11 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:07.503 06:33:11 -- common/autotest_common.sh@817 -- # '[' -z 4069220 ']' 00:07:07.503 06:33:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.503 06:33:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:07.503 06:33:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.503 06:33:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:07.503 06:33:11 -- common/autotest_common.sh@10 -- # set +x 00:07:07.503 [2024-04-17 06:33:12.025940] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:07:07.503 [2024-04-17 06:33:12.026018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4069220 ] 00:07:07.503 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.503 [2024-04-17 06:33:12.085035] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:07.503 [2024-04-17 06:33:12.085076] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.761 [2024-04-17 06:33:12.172772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.020 06:33:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:08.020 06:33:12 -- common/autotest_common.sh@850 -- # return 0 00:07:08.020 06:33:12 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=4069230 00:07:08.020 06:33:12 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:08.020 06:33:12 -- event/cpu_locks.sh@103 -- # waitforlisten 4069230 /var/tmp/spdk2.sock 00:07:08.020 06:33:12 -- common/autotest_common.sh@817 -- # '[' -z 4069230 ']' 00:07:08.020 06:33:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:08.020 06:33:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:08.020 06:33:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:08.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:08.020 06:33:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:08.020 06:33:12 -- common/autotest_common.sh@10 -- # set +x 00:07:08.020 [2024-04-17 06:33:12.477474] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:07:08.020 [2024-04-17 06:33:12.477576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4069230 ] 00:07:08.020 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.020 [2024-04-17 06:33:12.574425] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.278 [2024-04-17 06:33:12.753625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.844 06:33:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:08.845 06:33:13 -- common/autotest_common.sh@850 -- # return 0 00:07:08.845 06:33:13 -- event/cpu_locks.sh@105 -- # locks_exist 4069230 00:07:08.845 06:33:13 -- event/cpu_locks.sh@22 -- # lslocks -p 4069230 00:07:08.845 06:33:13 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:09.779 lslocks: write error 00:07:09.779 06:33:14 -- event/cpu_locks.sh@107 -- # killprocess 4069220 00:07:09.779 06:33:14 -- common/autotest_common.sh@936 -- # '[' -z 4069220 ']' 00:07:09.779 06:33:14 -- common/autotest_common.sh@940 -- # kill -0 4069220 00:07:09.779 06:33:14 -- common/autotest_common.sh@941 -- # uname 00:07:09.779 06:33:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:09.779 06:33:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4069220 00:07:09.779 06:33:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:09.779 06:33:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:09.779 06:33:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4069220' 00:07:09.779 killing process with pid 4069220 00:07:09.779 06:33:14 -- common/autotest_common.sh@955 -- # kill 4069220 00:07:09.779 06:33:14 -- common/autotest_common.sh@960 -- # wait 4069220 00:07:10.345 06:33:14 -- event/cpu_locks.sh@108 -- # killprocess 4069230 00:07:10.345 06:33:14 -- common/autotest_common.sh@936 -- # '[' -z 4069230 ']' 00:07:10.345 06:33:14 -- common/autotest_common.sh@940 -- # kill -0 4069230 00:07:10.345 06:33:14 -- common/autotest_common.sh@941 -- # uname 00:07:10.345 06:33:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:10.345 06:33:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4069230 00:07:10.345 06:33:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:10.345 06:33:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:10.345 06:33:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4069230' 00:07:10.345 killing process with pid 4069230 00:07:10.345 06:33:14 -- common/autotest_common.sh@955 -- # kill 4069230 00:07:10.345 06:33:14 -- common/autotest_common.sh@960 -- # wait 4069230 00:07:10.912 00:07:10.912 real 0m3.322s 00:07:10.912 user 0m3.432s 00:07:10.912 sys 0m1.093s 00:07:10.912 06:33:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:10.912 06:33:15 -- common/autotest_common.sh@10 -- # set +x 00:07:10.912 ************************************ 00:07:10.912 END TEST locking_app_on_unlocked_coremask 00:07:10.912 ************************************ 00:07:10.912 06:33:15 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:10.912 06:33:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:10.912 06:33:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.912 06:33:15 -- common/autotest_common.sh@10 -- # set +x 00:07:10.912 
************************************ 00:07:10.912 START TEST locking_app_on_locked_coremask 00:07:10.912 ************************************ 00:07:10.912 06:33:15 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:07:10.912 06:33:15 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=4069668 00:07:10.912 06:33:15 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:10.912 06:33:15 -- event/cpu_locks.sh@116 -- # waitforlisten 4069668 /var/tmp/spdk.sock 00:07:10.912 06:33:15 -- common/autotest_common.sh@817 -- # '[' -z 4069668 ']' 00:07:10.912 06:33:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.912 06:33:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:10.912 06:33:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.912 06:33:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:10.912 06:33:15 -- common/autotest_common.sh@10 -- # set +x 00:07:10.912 [2024-04-17 06:33:15.466917] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:07:10.912 [2024-04-17 06:33:15.466995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4069668 ] 00:07:10.912 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.171 [2024-04-17 06:33:15.525296] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.171 [2024-04-17 06:33:15.612288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.429 06:33:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:11.429 06:33:15 -- common/autotest_common.sh@850 -- # return 0 00:07:11.429 06:33:15 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=4069672 00:07:11.429 06:33:15 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:11.429 06:33:15 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 4069672 /var/tmp/spdk2.sock 00:07:11.429 06:33:15 -- common/autotest_common.sh@638 -- # local es=0 00:07:11.429 06:33:15 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 4069672 /var/tmp/spdk2.sock 00:07:11.429 06:33:15 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:07:11.429 06:33:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:11.429 06:33:15 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:07:11.429 06:33:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:11.429 06:33:15 -- common/autotest_common.sh@641 -- # waitforlisten 4069672 /var/tmp/spdk2.sock 00:07:11.429 06:33:15 -- common/autotest_common.sh@817 -- # '[' -z 4069672 ']' 00:07:11.429 06:33:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:11.429 06:33:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:11.429 06:33:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:11.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
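The second target launched just above reuses core 0 without --disable-cpumask-locks while pid 4069668 still holds the lock, so it is expected to abort; the claim_cpu_cores ERROR a few entries below is the point of the test. The negative check in outline (paths shortened, and the long-running success path ignored since only the failing exit matters here):

    spdk_tgt -m 0x1 &        # first instance, holds /var/tmp/spdk_cpu_lock_000
    # ... wait for it to come up ...
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
    # expected: "Cannot create lock on core 0, probably process <pid> has claimed it"
    #           "Unable to acquire lock on assigned core mask - exiting."
    [ $? -ne 0 ] && echo "second instance was refused the core lock, as expected"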
00:07:11.429 06:33:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:11.429 06:33:15 -- common/autotest_common.sh@10 -- # set +x 00:07:11.429 [2024-04-17 06:33:15.912349] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:07:11.429 [2024-04-17 06:33:15.912427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4069672 ] 00:07:11.429 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.429 [2024-04-17 06:33:16.003681] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 4069668 has claimed it. 00:07:11.429 [2024-04-17 06:33:16.003740] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:11.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (4069672) - No such process 00:07:11.995 ERROR: process (pid: 4069672) is no longer running 00:07:11.995 06:33:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:11.995 06:33:16 -- common/autotest_common.sh@850 -- # return 1 00:07:11.995 06:33:16 -- common/autotest_common.sh@641 -- # es=1 00:07:11.995 06:33:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:11.995 06:33:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:11.995 06:33:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:11.995 06:33:16 -- event/cpu_locks.sh@122 -- # locks_exist 4069668 00:07:11.995 06:33:16 -- event/cpu_locks.sh@22 -- # lslocks -p 4069668 00:07:11.995 06:33:16 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:12.561 lslocks: write error 00:07:12.561 06:33:17 -- event/cpu_locks.sh@124 -- # killprocess 4069668 00:07:12.561 06:33:17 -- common/autotest_common.sh@936 -- # '[' -z 4069668 ']' 00:07:12.561 06:33:17 -- common/autotest_common.sh@940 -- # kill -0 4069668 00:07:12.561 06:33:17 -- common/autotest_common.sh@941 -- # uname 00:07:12.561 06:33:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:12.561 06:33:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4069668 00:07:12.561 06:33:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:12.561 06:33:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:12.561 06:33:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4069668' 00:07:12.561 killing process with pid 4069668 00:07:12.561 06:33:17 -- common/autotest_common.sh@955 -- # kill 4069668 00:07:12.561 06:33:17 -- common/autotest_common.sh@960 -- # wait 4069668 00:07:13.127 00:07:13.127 real 0m2.093s 00:07:13.127 user 0m2.246s 00:07:13.127 sys 0m0.665s 00:07:13.127 06:33:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:13.127 06:33:17 -- common/autotest_common.sh@10 -- # set +x 00:07:13.127 ************************************ 00:07:13.127 END TEST locking_app_on_locked_coremask 00:07:13.127 ************************************ 00:07:13.127 06:33:17 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:13.127 06:33:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:13.127 06:33:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.127 06:33:17 -- common/autotest_common.sh@10 -- # set +x 00:07:13.127 ************************************ 00:07:13.127 START TEST locking_overlapped_coremask 00:07:13.127 
************************************ 00:07:13.127 06:33:17 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:07:13.127 06:33:17 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=4069970 00:07:13.127 06:33:17 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:13.127 06:33:17 -- event/cpu_locks.sh@133 -- # waitforlisten 4069970 /var/tmp/spdk.sock 00:07:13.127 06:33:17 -- common/autotest_common.sh@817 -- # '[' -z 4069970 ']' 00:07:13.127 06:33:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.127 06:33:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:13.127 06:33:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.127 06:33:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:13.127 06:33:17 -- common/autotest_common.sh@10 -- # set +x 00:07:13.127 [2024-04-17 06:33:17.684710] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:07:13.127 [2024-04-17 06:33:17.684804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4069970 ] 00:07:13.127 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.385 [2024-04-17 06:33:17.747531] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:13.385 [2024-04-17 06:33:17.836957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.385 [2024-04-17 06:33:17.837024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.385 [2024-04-17 06:33:17.837026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.643 06:33:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:13.643 06:33:18 -- common/autotest_common.sh@850 -- # return 0 00:07:13.643 06:33:18 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=4069978 00:07:13.643 06:33:18 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:13.643 06:33:18 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 4069978 /var/tmp/spdk2.sock 00:07:13.643 06:33:18 -- common/autotest_common.sh@638 -- # local es=0 00:07:13.643 06:33:18 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 4069978 /var/tmp/spdk2.sock 00:07:13.643 06:33:18 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:07:13.643 06:33:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:13.643 06:33:18 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:07:13.643 06:33:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:13.643 06:33:18 -- common/autotest_common.sh@641 -- # waitforlisten 4069978 /var/tmp/spdk2.sock 00:07:13.643 06:33:18 -- common/autotest_common.sh@817 -- # '[' -z 4069978 ']' 00:07:13.643 06:33:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:13.643 06:33:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:13.643 06:33:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:13.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:13.643 06:33:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:13.643 06:33:18 -- common/autotest_common.sh@10 -- # set +x 00:07:13.643 [2024-04-17 06:33:18.138509] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:07:13.643 [2024-04-17 06:33:18.138608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4069978 ] 00:07:13.643 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.643 [2024-04-17 06:33:18.227777] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4069970 has claimed it. 00:07:13.643 [2024-04-17 06:33:18.227844] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:14.576 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (4069978) - No such process 00:07:14.576 ERROR: process (pid: 4069978) is no longer running 00:07:14.576 06:33:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:14.576 06:33:18 -- common/autotest_common.sh@850 -- # return 1 00:07:14.576 06:33:18 -- common/autotest_common.sh@641 -- # es=1 00:07:14.576 06:33:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:14.576 06:33:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:14.576 06:33:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:14.576 06:33:18 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:14.576 06:33:18 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:14.576 06:33:18 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:14.576 06:33:18 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:14.576 06:33:18 -- event/cpu_locks.sh@141 -- # killprocess 4069970 00:07:14.576 06:33:18 -- common/autotest_common.sh@936 -- # '[' -z 4069970 ']' 00:07:14.576 06:33:18 -- common/autotest_common.sh@940 -- # kill -0 4069970 00:07:14.576 06:33:18 -- common/autotest_common.sh@941 -- # uname 00:07:14.576 06:33:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:14.576 06:33:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4069970 00:07:14.576 06:33:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:14.576 06:33:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:14.577 06:33:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4069970' 00:07:14.577 killing process with pid 4069970 00:07:14.577 06:33:18 -- common/autotest_common.sh@955 -- # kill 4069970 00:07:14.577 06:33:18 -- common/autotest_common.sh@960 -- # wait 4069970 00:07:14.835 00:07:14.835 real 0m1.622s 00:07:14.835 user 0m4.357s 00:07:14.835 sys 0m0.447s 00:07:14.835 06:33:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:14.835 06:33:19 -- common/autotest_common.sh@10 -- # set +x 00:07:14.835 ************************************ 00:07:14.835 END TEST locking_overlapped_coremask 00:07:14.835 ************************************ 00:07:14.835 06:33:19 -- event/cpu_locks.sh@172 -- # 
run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:14.835 06:33:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:14.835 06:33:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.835 06:33:19 -- common/autotest_common.sh@10 -- # set +x 00:07:14.835 ************************************ 00:07:14.835 START TEST locking_overlapped_coremask_via_rpc 00:07:14.835 ************************************ 00:07:14.835 06:33:19 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:07:14.835 06:33:19 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=4070147 00:07:14.835 06:33:19 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:14.835 06:33:19 -- event/cpu_locks.sh@149 -- # waitforlisten 4070147 /var/tmp/spdk.sock 00:07:14.835 06:33:19 -- common/autotest_common.sh@817 -- # '[' -z 4070147 ']' 00:07:14.835 06:33:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.835 06:33:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:14.835 06:33:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.835 06:33:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:14.835 06:33:19 -- common/autotest_common.sh@10 -- # set +x 00:07:14.835 [2024-04-17 06:33:19.423260] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:07:14.835 [2024-04-17 06:33:19.423339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4070147 ] 00:07:15.094 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.094 [2024-04-17 06:33:19.483824] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:15.094 [2024-04-17 06:33:19.483861] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:15.094 [2024-04-17 06:33:19.571699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.094 [2024-04-17 06:33:19.571756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.094 [2024-04-17 06:33:19.571759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.351 06:33:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:15.351 06:33:19 -- common/autotest_common.sh@850 -- # return 0 00:07:15.351 06:33:19 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=4070228 00:07:15.351 06:33:19 -- event/cpu_locks.sh@153 -- # waitforlisten 4070228 /var/tmp/spdk2.sock 00:07:15.351 06:33:19 -- common/autotest_common.sh@817 -- # '[' -z 4070228 ']' 00:07:15.351 06:33:19 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:15.351 06:33:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:15.351 06:33:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:15.351 06:33:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:15.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
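locking_overlapped_coremask_via_rpc repeats the same overlap, but both targets are started with --disable-cpumask-locks, so the conflicting masks are accepted at boot and the core locks are only taken later through an RPC. A rough sketch of the launch step, using only flags that appear in the trace:

    ./build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &                         # no core locks taken yet
    ./build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # starts despite the overlap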
00:07:15.351 06:33:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:15.351 06:33:19 -- common/autotest_common.sh@10 -- # set +x 00:07:15.351 [2024-04-17 06:33:19.875799] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:07:15.351 [2024-04-17 06:33:19.875877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4070228 ] 00:07:15.351 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.610 [2024-04-17 06:33:19.966493] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:15.610 [2024-04-17 06:33:19.966529] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:15.610 [2024-04-17 06:33:20.158393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.610 [2024-04-17 06:33:20.158513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:15.610 [2024-04-17 06:33:20.158515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.544 06:33:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:16.544 06:33:20 -- common/autotest_common.sh@850 -- # return 0 00:07:16.544 06:33:20 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:16.544 06:33:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:16.544 06:33:20 -- common/autotest_common.sh@10 -- # set +x 00:07:16.544 06:33:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:16.544 06:33:20 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:16.544 06:33:20 -- common/autotest_common.sh@638 -- # local es=0 00:07:16.544 06:33:20 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:16.544 06:33:20 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:07:16.544 06:33:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:16.544 06:33:20 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:07:16.544 06:33:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:16.544 06:33:20 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:16.544 06:33:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:16.544 06:33:20 -- common/autotest_common.sh@10 -- # set +x 00:07:16.544 [2024-04-17 06:33:20.817264] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4070147 has claimed it. 
00:07:16.544 request: 00:07:16.544 { 00:07:16.544 "method": "framework_enable_cpumask_locks", 00:07:16.544 "req_id": 1 00:07:16.544 } 00:07:16.544 Got JSON-RPC error response 00:07:16.544 response: 00:07:16.544 { 00:07:16.544 "code": -32603, 00:07:16.544 "message": "Failed to claim CPU core: 2" 00:07:16.544 } 00:07:16.544 06:33:20 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:07:16.544 06:33:20 -- common/autotest_common.sh@641 -- # es=1 00:07:16.544 06:33:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:16.544 06:33:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:16.544 06:33:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:16.544 06:33:20 -- event/cpu_locks.sh@158 -- # waitforlisten 4070147 /var/tmp/spdk.sock 00:07:16.544 06:33:20 -- common/autotest_common.sh@817 -- # '[' -z 4070147 ']' 00:07:16.544 06:33:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.544 06:33:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:16.544 06:33:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.544 06:33:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:16.544 06:33:20 -- common/autotest_common.sh@10 -- # set +x 00:07:16.544 06:33:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:16.544 06:33:21 -- common/autotest_common.sh@850 -- # return 0 00:07:16.544 06:33:21 -- event/cpu_locks.sh@159 -- # waitforlisten 4070228 /var/tmp/spdk2.sock 00:07:16.544 06:33:21 -- common/autotest_common.sh@817 -- # '[' -z 4070228 ']' 00:07:16.544 06:33:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:16.544 06:33:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:16.544 06:33:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:16.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
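The JSON-RPC exchange above is where the conflict finally surfaces: the first target enables its core locks over RPC and claims cores 0-2, so the same call against the second target's socket fails with error -32603 ("Failed to claim CPU core: 2"). The test drives this through its rpc_cmd helper; doing it by hand with the stock rpc.py client would look roughly like this (rpc.py usage is an assumption here, while the method name and socket paths are from the trace):

    scripts/rpc.py framework_enable_cpumask_locks                          # first target: succeeds, locks cores 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target: -32603, core 2 already locked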
00:07:16.544 06:33:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:16.544 06:33:21 -- common/autotest_common.sh@10 -- # set +x 00:07:16.834 06:33:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:16.834 06:33:21 -- common/autotest_common.sh@850 -- # return 0 00:07:16.834 06:33:21 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:16.834 06:33:21 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:16.834 06:33:21 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:16.834 06:33:21 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:16.834 00:07:16.834 real 0m1.935s 00:07:16.834 user 0m0.992s 00:07:16.834 sys 0m0.179s 00:07:16.834 06:33:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:16.834 06:33:21 -- common/autotest_common.sh@10 -- # set +x 00:07:16.834 ************************************ 00:07:16.834 END TEST locking_overlapped_coremask_via_rpc 00:07:16.834 ************************************ 00:07:16.834 06:33:21 -- event/cpu_locks.sh@174 -- # cleanup 00:07:16.834 06:33:21 -- event/cpu_locks.sh@15 -- # [[ -z 4070147 ]] 00:07:16.834 06:33:21 -- event/cpu_locks.sh@15 -- # killprocess 4070147 00:07:16.834 06:33:21 -- common/autotest_common.sh@936 -- # '[' -z 4070147 ']' 00:07:16.834 06:33:21 -- common/autotest_common.sh@940 -- # kill -0 4070147 00:07:16.834 06:33:21 -- common/autotest_common.sh@941 -- # uname 00:07:16.834 06:33:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:16.834 06:33:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4070147 00:07:16.834 06:33:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:16.834 06:33:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:16.834 06:33:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4070147' 00:07:16.834 killing process with pid 4070147 00:07:16.834 06:33:21 -- common/autotest_common.sh@955 -- # kill 4070147 00:07:16.834 06:33:21 -- common/autotest_common.sh@960 -- # wait 4070147 00:07:17.407 06:33:21 -- event/cpu_locks.sh@16 -- # [[ -z 4070228 ]] 00:07:17.407 06:33:21 -- event/cpu_locks.sh@16 -- # killprocess 4070228 00:07:17.407 06:33:21 -- common/autotest_common.sh@936 -- # '[' -z 4070228 ']' 00:07:17.407 06:33:21 -- common/autotest_common.sh@940 -- # kill -0 4070228 00:07:17.407 06:33:21 -- common/autotest_common.sh@941 -- # uname 00:07:17.407 06:33:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:17.407 06:33:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4070228 00:07:17.407 06:33:21 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:07:17.407 06:33:21 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:07:17.407 06:33:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4070228' 00:07:17.407 killing process with pid 4070228 00:07:17.407 06:33:21 -- common/autotest_common.sh@955 -- # kill 4070228 00:07:17.407 06:33:21 -- common/autotest_common.sh@960 -- # wait 4070228 00:07:17.665 06:33:22 -- event/cpu_locks.sh@18 -- # rm -f 00:07:17.665 06:33:22 -- event/cpu_locks.sh@1 -- # cleanup 00:07:17.665 06:33:22 -- event/cpu_locks.sh@15 -- # [[ -z 4070147 ]] 00:07:17.665 06:33:22 -- event/cpu_locks.sh@15 -- # killprocess 4070147 
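check_remaining_locks in the trace above confirms that exactly one lock file per claimed core is left in /var/tmp before cleanup removes them. Condensed from the xtrace output (the exact quoting inside cpu_locks.sh may differ slightly):

    locks=(/var/tmp/spdk_cpu_lock_*)                    # whatever lock files actually exist
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2, matching the 0x7 mask
    [[ ${locks[*]} == "${locks_expected[*]}" ]]         # any mismatch fails the test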
00:07:17.665 06:33:22 -- common/autotest_common.sh@936 -- # '[' -z 4070147 ']' 00:07:17.665 06:33:22 -- common/autotest_common.sh@940 -- # kill -0 4070147 00:07:17.665 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (4070147) - No such process 00:07:17.665 06:33:22 -- common/autotest_common.sh@963 -- # echo 'Process with pid 4070147 is not found' 00:07:17.665 Process with pid 4070147 is not found 00:07:17.665 06:33:22 -- event/cpu_locks.sh@16 -- # [[ -z 4070228 ]] 00:07:17.665 06:33:22 -- event/cpu_locks.sh@16 -- # killprocess 4070228 00:07:17.665 06:33:22 -- common/autotest_common.sh@936 -- # '[' -z 4070228 ']' 00:07:17.665 06:33:22 -- common/autotest_common.sh@940 -- # kill -0 4070228 00:07:17.665 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (4070228) - No such process 00:07:17.665 06:33:22 -- common/autotest_common.sh@963 -- # echo 'Process with pid 4070228 is not found' 00:07:17.665 Process with pid 4070228 is not found 00:07:17.665 06:33:22 -- event/cpu_locks.sh@18 -- # rm -f 00:07:17.665 00:07:17.665 real 0m16.219s 00:07:17.665 user 0m27.353s 00:07:17.665 sys 0m5.615s 00:07:17.665 06:33:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:17.665 06:33:22 -- common/autotest_common.sh@10 -- # set +x 00:07:17.665 ************************************ 00:07:17.665 END TEST cpu_locks 00:07:17.665 ************************************ 00:07:17.665 00:07:17.665 real 0m43.037s 00:07:17.665 user 1m21.273s 00:07:17.665 sys 0m10.010s 00:07:17.665 06:33:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:17.665 06:33:22 -- common/autotest_common.sh@10 -- # set +x 00:07:17.665 ************************************ 00:07:17.665 END TEST event 00:07:17.665 ************************************ 00:07:17.665 06:33:22 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:17.665 06:33:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:17.665 06:33:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.665 06:33:22 -- common/autotest_common.sh@10 -- # set +x 00:07:17.923 ************************************ 00:07:17.923 START TEST thread 00:07:17.923 ************************************ 00:07:17.923 06:33:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:17.923 * Looking for test storage... 00:07:17.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:17.923 06:33:22 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:17.923 06:33:22 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:17.923 06:33:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.923 06:33:22 -- common/autotest_common.sh@10 -- # set +x 00:07:17.923 ************************************ 00:07:17.923 START TEST thread_poller_perf 00:07:17.923 ************************************ 00:07:17.923 06:33:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:17.923 [2024-04-17 06:33:22.503541] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:07:17.923 [2024-04-17 06:33:22.503601] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4070667 ] 00:07:18.181 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.181 [2024-04-17 06:33:22.564690] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.181 [2024-04-17 06:33:22.654169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.181 [2024-04-17 06:33:22.654283] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:18.181 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:19.554 ====================================== 00:07:19.554 busy:2709721203 (cyc) 00:07:19.554 total_run_count: 292000 00:07:19.554 tsc_hz: 2700000000 (cyc) 00:07:19.554 ====================================== 00:07:19.554 poller_cost: 9279 (cyc), 3436 (nsec) 00:07:19.554 00:07:19.554 real 0m1.251s 00:07:19.554 user 0m1.165s 00:07:19.554 sys 0m0.080s 00:07:19.554 06:33:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:19.554 06:33:23 -- common/autotest_common.sh@10 -- # set +x 00:07:19.554 ************************************ 00:07:19.554 END TEST thread_poller_perf 00:07:19.554 ************************************ 00:07:19.554 06:33:23 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:19.554 06:33:23 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:19.554 06:33:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.554 06:33:23 -- common/autotest_common.sh@10 -- # set +x 00:07:19.554 ************************************ 00:07:19.554 START TEST thread_poller_perf 00:07:19.554 ************************************ 00:07:19.554 06:33:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:19.554 [2024-04-17 06:33:23.884825] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:07:19.554 [2024-04-17 06:33:23.884885] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4070827 ] 00:07:19.554 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.554 [2024-04-17 06:33:23.949004] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.554 [2024-04-17 06:33:24.037375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.554 [2024-04-17 06:33:24.037505] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:19.554 Running 1000 pollers for 1 seconds with 0 microseconds period. 
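The poller_cost figure reported for the 1-microsecond-period run above is simply the busy cycle count divided by the number of poller invocations, converted to nanoseconds through the printed TSC frequency. Reproducing it with shell arithmetic on the values from that summary:

    echo $(( 2709721203 / 292000 ))             # 9279 cycles per poller call
    echo $(( 9279 * 1000000000 / 2700000000 ))  # ~3436 nsec at the 2.7 GHz TSC reported above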
00:07:20.928 ====================================== 00:07:20.928 busy:2702926764 (cyc) 00:07:20.928 total_run_count: 3941000 00:07:20.928 tsc_hz: 2700000000 (cyc) 00:07:20.928 ====================================== 00:07:20.928 poller_cost: 685 (cyc), 253 (nsec) 00:07:20.928 00:07:20.928 real 0m1.249s 00:07:20.928 user 0m1.163s 00:07:20.928 sys 0m0.080s 00:07:20.928 06:33:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:20.928 06:33:25 -- common/autotest_common.sh@10 -- # set +x 00:07:20.928 ************************************ 00:07:20.928 END TEST thread_poller_perf 00:07:20.928 ************************************ 00:07:20.928 06:33:25 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:20.928 00:07:20.928 real 0m2.799s 00:07:20.928 user 0m2.440s 00:07:20.928 sys 0m0.333s 00:07:20.928 06:33:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:20.928 06:33:25 -- common/autotest_common.sh@10 -- # set +x 00:07:20.928 ************************************ 00:07:20.928 END TEST thread 00:07:20.928 ************************************ 00:07:20.928 06:33:25 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:20.928 06:33:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:20.928 06:33:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.928 06:33:25 -- common/autotest_common.sh@10 -- # set +x 00:07:20.928 ************************************ 00:07:20.928 START TEST accel 00:07:20.928 ************************************ 00:07:20.928 06:33:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:20.928 * Looking for test storage... 00:07:20.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:20.928 06:33:25 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:20.928 06:33:25 -- accel/accel.sh@82 -- # get_expected_opcs 00:07:20.928 06:33:25 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:20.928 06:33:25 -- accel/accel.sh@62 -- # spdk_tgt_pid=4071027 00:07:20.928 06:33:25 -- accel/accel.sh@63 -- # waitforlisten 4071027 00:07:20.928 06:33:25 -- common/autotest_common.sh@817 -- # '[' -z 4071027 ']' 00:07:20.928 06:33:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.928 06:33:25 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:20.928 06:33:25 -- accel/accel.sh@61 -- # build_accel_config 00:07:20.928 06:33:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:20.928 06:33:25 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.928 06:33:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.928 06:33:25 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.928 06:33:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:20.928 06:33:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.928 06:33:25 -- common/autotest_common.sh@10 -- # set +x 00:07:20.928 06:33:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.928 06:33:25 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.928 06:33:25 -- accel/accel.sh@40 -- # local IFS=, 00:07:20.928 06:33:25 -- accel/accel.sh@41 -- # jq -r . 
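get_expected_opcs above asks the freshly started target which module services each accel opcode and records the answers; with no hardware accel module configured, every opcode in this run resolves to software. The same query can presumably be issued by hand against a running spdk_tgt; the RPC name and jq filter below are taken from the trace, while going through the stock rpc.py client is an assumption:

    scripts/rpc.py accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'   # one "opcode=module" line each, all "software" here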
00:07:20.928 [2024-04-17 06:33:25.369969] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:07:20.928 [2024-04-17 06:33:25.370061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4071027 ] 00:07:20.928 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.928 [2024-04-17 06:33:25.433610] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.928 [2024-04-17 06:33:25.520090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.186 06:33:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:21.186 06:33:25 -- common/autotest_common.sh@850 -- # return 0 00:07:21.186 06:33:25 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:21.186 06:33:25 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:21.186 06:33:25 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:21.186 06:33:25 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:21.186 06:33:25 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:21.186 06:33:25 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:21.186 06:33:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:21.186 06:33:25 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:07:21.186 06:33:25 -- common/autotest_common.sh@10 -- # set +x 00:07:21.186 06:33:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:21.445 06:33:25 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.445 06:33:25 -- accel/accel.sh@72 -- # IFS== 00:07:21.445 06:33:25 -- accel/accel.sh@72 -- # read -r opc module 00:07:21.445 06:33:25 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.445 06:33:25 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.445 06:33:25 -- accel/accel.sh@72 -- # IFS== 00:07:21.445 06:33:25 -- accel/accel.sh@72 -- # read -r opc module 00:07:21.445 06:33:25 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.445 06:33:25 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.445 06:33:25 -- accel/accel.sh@72 -- # IFS== 00:07:21.445 06:33:25 -- accel/accel.sh@72 -- # read -r opc module 00:07:21.445 06:33:25 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.445 06:33:25 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.445 06:33:25 -- accel/accel.sh@72 -- # IFS== 00:07:21.445 06:33:25 -- accel/accel.sh@72 -- # read -r opc module 00:07:21.445 06:33:25 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.445 06:33:25 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.445 06:33:25 -- accel/accel.sh@72 -- # IFS== 00:07:21.445 06:33:25 -- accel/accel.sh@72 -- # read -r opc module 00:07:21.445 06:33:25 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.445 06:33:25 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.445 06:33:25 -- accel/accel.sh@72 -- # IFS== 00:07:21.445 06:33:25 -- accel/accel.sh@72 -- # read -r opc module 00:07:21.445 06:33:25 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.445 06:33:25 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.445 06:33:25 -- accel/accel.sh@72 -- # IFS== 00:07:21.445 06:33:25 -- accel/accel.sh@72 -- # read -r opc module 00:07:21.445 06:33:25 -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:07:21.445 06:33:25 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.445 06:33:25 -- accel/accel.sh@72 -- # IFS== 00:07:21.445 06:33:25 -- accel/accel.sh@72 -- # read -r opc module 00:07:21.445 06:33:25 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.445 06:33:25 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.445 06:33:25 -- accel/accel.sh@72 -- # IFS== 00:07:21.445 06:33:25 -- accel/accel.sh@72 -- # read -r opc module 00:07:21.445 06:33:25 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.445 06:33:25 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.445 06:33:25 -- accel/accel.sh@72 -- # IFS== 00:07:21.445 06:33:25 -- accel/accel.sh@72 -- # read -r opc module 00:07:21.445 06:33:25 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.445 06:33:25 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.445 06:33:25 -- accel/accel.sh@72 -- # IFS== 00:07:21.445 06:33:25 -- accel/accel.sh@72 -- # read -r opc module 00:07:21.445 06:33:25 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.445 06:33:25 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.445 06:33:25 -- accel/accel.sh@72 -- # IFS== 00:07:21.445 06:33:25 -- accel/accel.sh@72 -- # read -r opc module 00:07:21.445 06:33:25 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.445 06:33:25 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.445 06:33:25 -- accel/accel.sh@72 -- # IFS== 00:07:21.445 06:33:25 -- accel/accel.sh@72 -- # read -r opc module 00:07:21.445 06:33:25 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.445 06:33:25 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.445 06:33:25 -- accel/accel.sh@72 -- # IFS== 00:07:21.445 06:33:25 -- accel/accel.sh@72 -- # read -r opc module 00:07:21.445 06:33:25 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.445 06:33:25 -- accel/accel.sh@75 -- # killprocess 4071027 00:07:21.445 06:33:25 -- common/autotest_common.sh@936 -- # '[' -z 4071027 ']' 00:07:21.445 06:33:25 -- common/autotest_common.sh@940 -- # kill -0 4071027 00:07:21.445 06:33:25 -- common/autotest_common.sh@941 -- # uname 00:07:21.445 06:33:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:21.445 06:33:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4071027 00:07:21.445 06:33:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:21.445 06:33:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:21.445 06:33:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4071027' 00:07:21.445 killing process with pid 4071027 00:07:21.445 06:33:25 -- common/autotest_common.sh@955 -- # kill 4071027 00:07:21.445 06:33:25 -- common/autotest_common.sh@960 -- # wait 4071027 00:07:21.710 06:33:26 -- accel/accel.sh@76 -- # trap - ERR 00:07:21.711 06:33:26 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:21.711 06:33:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:21.711 06:33:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:21.711 06:33:26 -- common/autotest_common.sh@10 -- # set +x 00:07:21.970 06:33:26 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:07:21.970 06:33:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:21.970 06:33:26 -- accel/accel.sh@12 -- # 
build_accel_config 00:07:21.970 06:33:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.970 06:33:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.970 06:33:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.970 06:33:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.970 06:33:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.970 06:33:26 -- accel/accel.sh@40 -- # local IFS=, 00:07:21.970 06:33:26 -- accel/accel.sh@41 -- # jq -r . 00:07:21.970 06:33:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:21.970 06:33:26 -- common/autotest_common.sh@10 -- # set +x 00:07:21.970 06:33:26 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:21.970 06:33:26 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:21.970 06:33:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:21.970 06:33:26 -- common/autotest_common.sh@10 -- # set +x 00:07:21.970 ************************************ 00:07:21.970 START TEST accel_missing_filename 00:07:21.970 ************************************ 00:07:21.970 06:33:26 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:07:21.970 06:33:26 -- common/autotest_common.sh@638 -- # local es=0 00:07:21.970 06:33:26 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:21.970 06:33:26 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:21.970 06:33:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:21.970 06:33:26 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:21.970 06:33:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:21.970 06:33:26 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:07:21.970 06:33:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:21.970 06:33:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.970 06:33:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.970 06:33:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.970 06:33:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.970 06:33:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.970 06:33:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.970 06:33:26 -- accel/accel.sh@40 -- # local IFS=, 00:07:21.970 06:33:26 -- accel/accel.sh@41 -- # jq -r . 00:07:21.970 [2024-04-17 06:33:26.503288] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:07:21.970 [2024-04-17 06:33:26.503349] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4071227 ] 00:07:21.970 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.970 [2024-04-17 06:33:26.566611] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.228 [2024-04-17 06:33:26.656861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.228 [2024-04-17 06:33:26.657505] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:22.228 [2024-04-17 06:33:26.717236] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:22.228 [2024-04-17 06:33:26.800524] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:07:22.487 A filename is required. 
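accel_missing_filename runs the compress workload with no input file on purpose, which is why accel_perf aborts with "A filename is required." above; per the tool's own help text, -l names the uncompressed input for compress/decompress workloads. A sketch of the failing call next to one that at least passes that argument check (the input file is the test/accel/bib fixture the next test uses):

    ./build/examples/accel_perf -t 1 -w compress                    # rejected: no input file given
    ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib  # supplies the required -l input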
00:07:22.487 06:33:26 -- common/autotest_common.sh@641 -- # es=234 00:07:22.487 06:33:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:22.487 06:33:26 -- common/autotest_common.sh@650 -- # es=106 00:07:22.487 06:33:26 -- common/autotest_common.sh@651 -- # case "$es" in 00:07:22.487 06:33:26 -- common/autotest_common.sh@658 -- # es=1 00:07:22.487 06:33:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:22.487 00:07:22.487 real 0m0.394s 00:07:22.487 user 0m0.289s 00:07:22.487 sys 0m0.140s 00:07:22.487 06:33:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:22.487 06:33:26 -- common/autotest_common.sh@10 -- # set +x 00:07:22.487 ************************************ 00:07:22.487 END TEST accel_missing_filename 00:07:22.487 ************************************ 00:07:22.487 06:33:26 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:22.487 06:33:26 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:22.487 06:33:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:22.487 06:33:26 -- common/autotest_common.sh@10 -- # set +x 00:07:22.487 ************************************ 00:07:22.487 START TEST accel_compress_verify 00:07:22.487 ************************************ 00:07:22.487 06:33:26 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:22.487 06:33:26 -- common/autotest_common.sh@638 -- # local es=0 00:07:22.487 06:33:26 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:22.487 06:33:26 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:22.487 06:33:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:22.487 06:33:26 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:22.487 06:33:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:22.487 06:33:27 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:22.487 06:33:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:22.487 06:33:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.487 06:33:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.487 06:33:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.487 06:33:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.487 06:33:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.487 06:33:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.487 06:33:27 -- accel/accel.sh@40 -- # local IFS=, 00:07:22.487 06:33:27 -- accel/accel.sh@41 -- # jq -r . 00:07:22.487 [2024-04-17 06:33:27.017569] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:07:22.487 [2024-04-17 06:33:27.017635] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4071368 ] 00:07:22.487 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.487 [2024-04-17 06:33:27.080093] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.745 [2024-04-17 06:33:27.172249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.745 [2024-04-17 06:33:27.172920] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:22.745 [2024-04-17 06:33:27.234405] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:22.745 [2024-04-17 06:33:27.318224] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:07:23.004 00:07:23.004 Compression does not support the verify option, aborting. 00:07:23.004 06:33:27 -- common/autotest_common.sh@641 -- # es=161 00:07:23.004 06:33:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:23.004 06:33:27 -- common/autotest_common.sh@650 -- # es=33 00:07:23.004 06:33:27 -- common/autotest_common.sh@651 -- # case "$es" in 00:07:23.004 06:33:27 -- common/autotest_common.sh@658 -- # es=1 00:07:23.004 06:33:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:23.004 00:07:23.004 real 0m0.401s 00:07:23.004 user 0m0.294s 00:07:23.004 sys 0m0.142s 00:07:23.004 06:33:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:23.004 06:33:27 -- common/autotest_common.sh@10 -- # set +x 00:07:23.004 ************************************ 00:07:23.004 END TEST accel_compress_verify 00:07:23.004 ************************************ 00:07:23.004 06:33:27 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:23.004 06:33:27 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:23.004 06:33:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.004 06:33:27 -- common/autotest_common.sh@10 -- # set +x 00:07:23.004 ************************************ 00:07:23.004 START TEST accel_wrong_workload 00:07:23.004 ************************************ 00:07:23.004 06:33:27 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:07:23.004 06:33:27 -- common/autotest_common.sh@638 -- # local es=0 00:07:23.004 06:33:27 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:23.004 06:33:27 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:23.005 06:33:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:23.005 06:33:27 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:23.005 06:33:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:23.005 06:33:27 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:07:23.005 06:33:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:23.005 06:33:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.005 06:33:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.005 06:33:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.005 06:33:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.005 06:33:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.005 06:33:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.005 06:33:27 -- accel/accel.sh@40 -- # local IFS=, 00:07:23.005 06:33:27 -- accel/accel.sh@41 -- # 
jq -r . 00:07:23.005 Unsupported workload type: foobar 00:07:23.005 [2024-04-17 06:33:27.540836] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:23.005 accel_perf options: 00:07:23.005 [-h help message] 00:07:23.005 [-q queue depth per core] 00:07:23.005 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:23.005 [-T number of threads per core 00:07:23.005 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:23.005 [-t time in seconds] 00:07:23.005 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:23.005 [ dif_verify, , dif_generate, dif_generate_copy 00:07:23.005 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:23.005 [-l for compress/decompress workloads, name of uncompressed input file 00:07:23.005 [-S for crc32c workload, use this seed value (default 0) 00:07:23.005 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:23.005 [-f for fill workload, use this BYTE value (default 255) 00:07:23.005 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:23.005 [-y verify result if this switch is on] 00:07:23.005 [-a tasks to allocate per core (default: same value as -q)] 00:07:23.005 Can be used to spread operations across a wider range of memory. 00:07:23.005 06:33:27 -- common/autotest_common.sh@641 -- # es=1 00:07:23.005 06:33:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:23.005 06:33:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:23.005 06:33:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:23.005 00:07:23.005 real 0m0.025s 00:07:23.005 user 0m0.011s 00:07:23.005 sys 0m0.014s 00:07:23.005 06:33:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:23.005 06:33:27 -- common/autotest_common.sh@10 -- # set +x 00:07:23.005 ************************************ 00:07:23.005 END TEST accel_wrong_workload 00:07:23.005 ************************************ 00:07:23.005 Error: writing output failed: Broken pipe 00:07:23.005 06:33:27 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:23.005 06:33:27 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:23.005 06:33:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.005 06:33:27 -- common/autotest_common.sh@10 -- # set +x 00:07:23.264 ************************************ 00:07:23.264 START TEST accel_negative_buffers 00:07:23.264 ************************************ 00:07:23.264 06:33:27 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:23.264 06:33:27 -- common/autotest_common.sh@638 -- # local es=0 00:07:23.264 06:33:27 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:23.264 06:33:27 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:23.264 06:33:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:23.264 06:33:27 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:23.264 06:33:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:23.264 06:33:27 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:07:23.264 06:33:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 
1 -w xor -y -x -1 00:07:23.264 06:33:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.264 06:33:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.264 06:33:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.264 06:33:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.264 06:33:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.264 06:33:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.264 06:33:27 -- accel/accel.sh@40 -- # local IFS=, 00:07:23.264 06:33:27 -- accel/accel.sh@41 -- # jq -r . 00:07:23.264 -x option must be non-negative. 00:07:23.264 [2024-04-17 06:33:27.678289] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:23.264 accel_perf options: 00:07:23.264 [-h help message] 00:07:23.264 [-q queue depth per core] 00:07:23.264 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:23.264 [-T number of threads per core 00:07:23.264 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:23.264 [-t time in seconds] 00:07:23.264 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:23.264 [ dif_verify, , dif_generate, dif_generate_copy 00:07:23.264 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:23.264 [-l for compress/decompress workloads, name of uncompressed input file 00:07:23.264 [-S for crc32c workload, use this seed value (default 0) 00:07:23.264 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:23.264 [-f for fill workload, use this BYTE value (default 255) 00:07:23.264 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:23.264 [-y verify result if this switch is on] 00:07:23.264 [-a tasks to allocate per core (default: same value as -q)] 00:07:23.264 Can be used to spread operations across a wider range of memory. 
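accel_negative_buffers feeds -x -1 to the xor workload and expects the option parser to refuse it before any work is submitted; the usage text above notes that the xor source-buffer count has a minimum of 2. For contrast, a sketch of the rejected call and a presumably valid one (flags from the help text; -x 2 is just the documented minimum, not a value this test runs):

    ./build/examples/accel_perf -t 1 -w xor -y -x -1   # rejected: "-x option must be non-negative"
    ./build/examples/accel_perf -t 1 -w xor -y -x 2    # the documented minimum of two source buffers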
00:07:23.264 06:33:27 -- common/autotest_common.sh@641 -- # es=1 00:07:23.264 06:33:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:23.264 06:33:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:23.264 06:33:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:23.264 00:07:23.264 real 0m0.021s 00:07:23.264 user 0m0.012s 00:07:23.264 sys 0m0.009s 00:07:23.264 06:33:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:23.264 06:33:27 -- common/autotest_common.sh@10 -- # set +x 00:07:23.264 ************************************ 00:07:23.264 END TEST accel_negative_buffers 00:07:23.264 ************************************ 00:07:23.264 Error: writing output failed: Broken pipe 00:07:23.264 06:33:27 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:23.264 06:33:27 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:23.264 06:33:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.264 06:33:27 -- common/autotest_common.sh@10 -- # set +x 00:07:23.264 ************************************ 00:07:23.264 START TEST accel_crc32c 00:07:23.264 ************************************ 00:07:23.264 06:33:27 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:23.264 06:33:27 -- accel/accel.sh@16 -- # local accel_opc 00:07:23.264 06:33:27 -- accel/accel.sh@17 -- # local accel_module 00:07:23.264 06:33:27 -- accel/accel.sh@19 -- # IFS=: 00:07:23.264 06:33:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:23.264 06:33:27 -- accel/accel.sh@19 -- # read -r var val 00:07:23.264 06:33:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:23.264 06:33:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.264 06:33:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.264 06:33:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.264 06:33:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.264 06:33:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.264 06:33:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.264 06:33:27 -- accel/accel.sh@40 -- # local IFS=, 00:07:23.264 06:33:27 -- accel/accel.sh@41 -- # jq -r . 00:07:23.264 [2024-04-17 06:33:27.807926] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:07:23.264 [2024-04-17 06:33:27.807992] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4071572 ] 00:07:23.264 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.523 [2024-04-17 06:33:27.872588] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.523 [2024-04-17 06:33:27.959999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.523 [2024-04-17 06:33:27.960602] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:23.523 06:33:28 -- accel/accel.sh@20 -- # val= 00:07:23.523 06:33:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.523 06:33:28 -- accel/accel.sh@19 -- # IFS=: 00:07:23.523 06:33:28 -- accel/accel.sh@19 -- # read -r var val 00:07:23.523 06:33:28 -- accel/accel.sh@20 -- # val= 00:07:23.523 06:33:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.523 06:33:28 -- accel/accel.sh@19 -- # IFS=: 00:07:23.523 06:33:28 -- accel/accel.sh@19 -- # read -r var val 00:07:23.523 06:33:28 -- accel/accel.sh@20 -- # val=0x1 00:07:23.523 06:33:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.523 06:33:28 -- accel/accel.sh@19 -- # IFS=: 00:07:23.523 06:33:28 -- accel/accel.sh@19 -- # read -r var val 00:07:23.523 06:33:28 -- accel/accel.sh@20 -- # val= 00:07:23.523 06:33:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.523 06:33:28 -- accel/accel.sh@19 -- # IFS=: 00:07:23.523 06:33:28 -- accel/accel.sh@19 -- # read -r var val 00:07:23.523 06:33:28 -- accel/accel.sh@20 -- # val= 00:07:23.523 06:33:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.523 06:33:28 -- accel/accel.sh@19 -- # IFS=: 00:07:23.523 06:33:28 -- accel/accel.sh@19 -- # read -r var val 00:07:23.523 06:33:28 -- accel/accel.sh@20 -- # val=crc32c 00:07:23.523 06:33:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.523 06:33:28 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:23.523 06:33:28 -- accel/accel.sh@19 -- # IFS=: 00:07:23.523 06:33:28 -- accel/accel.sh@19 -- # read -r var val 00:07:23.523 06:33:28 -- accel/accel.sh@20 -- # val=32 00:07:23.523 06:33:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.523 06:33:28 -- accel/accel.sh@19 -- # IFS=: 00:07:23.523 06:33:28 -- accel/accel.sh@19 -- # read -r var val 00:07:23.523 06:33:28 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.523 06:33:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.523 06:33:28 -- accel/accel.sh@19 -- # IFS=: 00:07:23.523 06:33:28 -- accel/accel.sh@19 -- # read -r var val 00:07:23.523 06:33:28 -- accel/accel.sh@20 -- # val= 00:07:23.523 06:33:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.523 06:33:28 -- accel/accel.sh@19 -- # IFS=: 00:07:23.523 06:33:28 -- accel/accel.sh@19 -- # read -r var val 00:07:23.523 06:33:28 -- accel/accel.sh@20 -- # val=software 00:07:23.523 06:33:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.523 06:33:28 -- accel/accel.sh@22 -- # accel_module=software 00:07:23.523 06:33:28 -- accel/accel.sh@19 -- # IFS=: 00:07:23.523 06:33:28 -- accel/accel.sh@19 -- # read -r var val 00:07:23.523 06:33:28 -- accel/accel.sh@20 -- # val=32 00:07:23.523 06:33:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.523 06:33:28 -- accel/accel.sh@19 -- # IFS=: 00:07:23.523 06:33:28 -- accel/accel.sh@19 -- # read -r var val 00:07:23.523 06:33:28 -- accel/accel.sh@20 -- # val=32 00:07:23.523 06:33:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.523 06:33:28 -- 
accel/accel.sh@19 -- # IFS=: 00:07:23.523 06:33:28 -- accel/accel.sh@19 -- # read -r var val 00:07:23.523 06:33:28 -- accel/accel.sh@20 -- # val=1 00:07:23.523 06:33:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.523 06:33:28 -- accel/accel.sh@19 -- # IFS=: 00:07:23.523 06:33:28 -- accel/accel.sh@19 -- # read -r var val 00:07:23.523 06:33:28 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.523 06:33:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.523 06:33:28 -- accel/accel.sh@19 -- # IFS=: 00:07:23.523 06:33:28 -- accel/accel.sh@19 -- # read -r var val 00:07:23.523 06:33:28 -- accel/accel.sh@20 -- # val=Yes 00:07:23.523 06:33:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.523 06:33:28 -- accel/accel.sh@19 -- # IFS=: 00:07:23.523 06:33:28 -- accel/accel.sh@19 -- # read -r var val 00:07:23.523 06:33:28 -- accel/accel.sh@20 -- # val= 00:07:23.523 06:33:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.523 06:33:28 -- accel/accel.sh@19 -- # IFS=: 00:07:23.523 06:33:28 -- accel/accel.sh@19 -- # read -r var val 00:07:23.523 06:33:28 -- accel/accel.sh@20 -- # val= 00:07:23.523 06:33:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.523 06:33:28 -- accel/accel.sh@19 -- # IFS=: 00:07:23.523 06:33:28 -- accel/accel.sh@19 -- # read -r var val 00:07:24.898 06:33:29 -- accel/accel.sh@20 -- # val= 00:07:24.898 06:33:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.898 06:33:29 -- accel/accel.sh@19 -- # IFS=: 00:07:24.898 06:33:29 -- accel/accel.sh@19 -- # read -r var val 00:07:24.898 06:33:29 -- accel/accel.sh@20 -- # val= 00:07:24.898 06:33:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.898 06:33:29 -- accel/accel.sh@19 -- # IFS=: 00:07:24.898 06:33:29 -- accel/accel.sh@19 -- # read -r var val 00:07:24.898 06:33:29 -- accel/accel.sh@20 -- # val= 00:07:24.898 06:33:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.898 06:33:29 -- accel/accel.sh@19 -- # IFS=: 00:07:24.898 06:33:29 -- accel/accel.sh@19 -- # read -r var val 00:07:24.898 06:33:29 -- accel/accel.sh@20 -- # val= 00:07:24.898 06:33:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.898 06:33:29 -- accel/accel.sh@19 -- # IFS=: 00:07:24.898 06:33:29 -- accel/accel.sh@19 -- # read -r var val 00:07:24.898 06:33:29 -- accel/accel.sh@20 -- # val= 00:07:24.898 06:33:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.898 06:33:29 -- accel/accel.sh@19 -- # IFS=: 00:07:24.898 06:33:29 -- accel/accel.sh@19 -- # read -r var val 00:07:24.898 06:33:29 -- accel/accel.sh@20 -- # val= 00:07:24.898 06:33:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.898 06:33:29 -- accel/accel.sh@19 -- # IFS=: 00:07:24.898 06:33:29 -- accel/accel.sh@19 -- # read -r var val 00:07:24.898 06:33:29 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.898 06:33:29 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:24.898 06:33:29 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.898 00:07:24.898 real 0m1.388s 00:07:24.898 user 0m1.249s 00:07:24.898 sys 0m0.141s 00:07:24.898 06:33:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:24.898 06:33:29 -- common/autotest_common.sh@10 -- # set +x 00:07:24.898 ************************************ 00:07:24.898 END TEST accel_crc32c 00:07:24.898 ************************************ 00:07:24.898 06:33:29 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:24.898 06:33:29 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:24.898 06:33:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:24.898 06:33:29 -- 
common/autotest_common.sh@10 -- # set +x 00:07:24.898 ************************************ 00:07:24.898 START TEST accel_crc32c_C2 00:07:24.898 ************************************ 00:07:24.898 06:33:29 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:24.898 06:33:29 -- accel/accel.sh@16 -- # local accel_opc 00:07:24.898 06:33:29 -- accel/accel.sh@17 -- # local accel_module 00:07:24.898 06:33:29 -- accel/accel.sh@19 -- # IFS=: 00:07:24.898 06:33:29 -- accel/accel.sh@19 -- # read -r var val 00:07:24.898 06:33:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:24.898 06:33:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:24.898 06:33:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.898 06:33:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.898 06:33:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.898 06:33:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.898 06:33:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.898 06:33:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.898 06:33:29 -- accel/accel.sh@40 -- # local IFS=, 00:07:24.898 06:33:29 -- accel/accel.sh@41 -- # jq -r . 00:07:24.898 [2024-04-17 06:33:29.319422] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:07:24.898 [2024-04-17 06:33:29.319482] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4071736 ] 00:07:24.898 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.898 [2024-04-17 06:33:29.382866] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.898 [2024-04-17 06:33:29.472830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.898 [2024-04-17 06:33:29.473538] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:25.157 06:33:29 -- accel/accel.sh@20 -- # val= 00:07:25.157 06:33:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # IFS=: 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # read -r var val 00:07:25.157 06:33:29 -- accel/accel.sh@20 -- # val= 00:07:25.157 06:33:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # IFS=: 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # read -r var val 00:07:25.157 06:33:29 -- accel/accel.sh@20 -- # val=0x1 00:07:25.157 06:33:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # IFS=: 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # read -r var val 00:07:25.157 06:33:29 -- accel/accel.sh@20 -- # val= 00:07:25.157 06:33:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # IFS=: 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # read -r var val 00:07:25.157 06:33:29 -- accel/accel.sh@20 -- # val= 00:07:25.157 06:33:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # IFS=: 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # read -r var val 00:07:25.157 06:33:29 -- accel/accel.sh@20 -- # val=crc32c 00:07:25.157 06:33:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.157 06:33:29 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # IFS=: 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # read -r var val 00:07:25.157 06:33:29 -- 
accel/accel.sh@20 -- # val=0 00:07:25.157 06:33:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # IFS=: 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # read -r var val 00:07:25.157 06:33:29 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.157 06:33:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # IFS=: 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # read -r var val 00:07:25.157 06:33:29 -- accel/accel.sh@20 -- # val= 00:07:25.157 06:33:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # IFS=: 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # read -r var val 00:07:25.157 06:33:29 -- accel/accel.sh@20 -- # val=software 00:07:25.157 06:33:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.157 06:33:29 -- accel/accel.sh@22 -- # accel_module=software 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # IFS=: 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # read -r var val 00:07:25.157 06:33:29 -- accel/accel.sh@20 -- # val=32 00:07:25.157 06:33:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # IFS=: 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # read -r var val 00:07:25.157 06:33:29 -- accel/accel.sh@20 -- # val=32 00:07:25.157 06:33:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # IFS=: 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # read -r var val 00:07:25.157 06:33:29 -- accel/accel.sh@20 -- # val=1 00:07:25.157 06:33:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # IFS=: 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # read -r var val 00:07:25.157 06:33:29 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.157 06:33:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # IFS=: 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # read -r var val 00:07:25.157 06:33:29 -- accel/accel.sh@20 -- # val=Yes 00:07:25.157 06:33:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # IFS=: 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # read -r var val 00:07:25.157 06:33:29 -- accel/accel.sh@20 -- # val= 00:07:25.157 06:33:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # IFS=: 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # read -r var val 00:07:25.157 06:33:29 -- accel/accel.sh@20 -- # val= 00:07:25.157 06:33:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # IFS=: 00:07:25.157 06:33:29 -- accel/accel.sh@19 -- # read -r var val 00:07:26.531 06:33:30 -- accel/accel.sh@20 -- # val= 00:07:26.531 06:33:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.531 06:33:30 -- accel/accel.sh@19 -- # IFS=: 00:07:26.531 06:33:30 -- accel/accel.sh@19 -- # read -r var val 00:07:26.531 06:33:30 -- accel/accel.sh@20 -- # val= 00:07:26.531 06:33:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.531 06:33:30 -- accel/accel.sh@19 -- # IFS=: 00:07:26.531 06:33:30 -- accel/accel.sh@19 -- # read -r var val 00:07:26.531 06:33:30 -- accel/accel.sh@20 -- # val= 00:07:26.531 06:33:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.531 06:33:30 -- accel/accel.sh@19 -- # IFS=: 00:07:26.531 06:33:30 -- accel/accel.sh@19 -- # read -r var val 00:07:26.531 06:33:30 -- accel/accel.sh@20 -- # val= 00:07:26.531 06:33:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.531 06:33:30 -- accel/accel.sh@19 -- # IFS=: 00:07:26.531 06:33:30 
-- accel/accel.sh@19 -- # read -r var val 00:07:26.531 06:33:30 -- accel/accel.sh@20 -- # val= 00:07:26.531 06:33:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.531 06:33:30 -- accel/accel.sh@19 -- # IFS=: 00:07:26.531 06:33:30 -- accel/accel.sh@19 -- # read -r var val 00:07:26.531 06:33:30 -- accel/accel.sh@20 -- # val= 00:07:26.531 06:33:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.531 06:33:30 -- accel/accel.sh@19 -- # IFS=: 00:07:26.531 06:33:30 -- accel/accel.sh@19 -- # read -r var val 00:07:26.531 06:33:30 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.531 06:33:30 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:26.531 06:33:30 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.531 00:07:26.531 real 0m1.410s 00:07:26.531 user 0m1.265s 00:07:26.531 sys 0m0.147s 00:07:26.531 06:33:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:26.531 06:33:30 -- common/autotest_common.sh@10 -- # set +x 00:07:26.531 ************************************ 00:07:26.531 END TEST accel_crc32c_C2 00:07:26.531 ************************************ 00:07:26.531 06:33:30 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:26.531 06:33:30 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:26.531 06:33:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:26.531 06:33:30 -- common/autotest_common.sh@10 -- # set +x 00:07:26.531 ************************************ 00:07:26.531 START TEST accel_copy 00:07:26.531 ************************************ 00:07:26.531 06:33:30 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:07:26.531 06:33:30 -- accel/accel.sh@16 -- # local accel_opc 00:07:26.531 06:33:30 -- accel/accel.sh@17 -- # local accel_module 00:07:26.531 06:33:30 -- accel/accel.sh@19 -- # IFS=: 00:07:26.531 06:33:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:26.531 06:33:30 -- accel/accel.sh@19 -- # read -r var val 00:07:26.531 06:33:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:26.531 06:33:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.531 06:33:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.531 06:33:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.531 06:33:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.531 06:33:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.531 06:33:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.531 06:33:30 -- accel/accel.sh@40 -- # local IFS=, 00:07:26.531 06:33:30 -- accel/accel.sh@41 -- # jq -r . 00:07:26.531 [2024-04-17 06:33:30.851490] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
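The bulk of the trace above is one shell loop re-evaluated for every line of accel_perf output: IFS is set to ':', each line is split by read -r var val, and a case statement files the pieces into variables such as accel_opc and accel_module. A minimal sketch of that shape follows; the sample input and the case patterns are placeholders, since the exact strings being matched are not visible in the trace — only the loop shape and the two variable names are taken from it.

  # Sketch of the parse loop behind the repeating "IFS=: / read -r var val / case $var in" lines.
  # Sample input and case patterns are placeholders; accel_opc/accel_module values mirror the trace.
  printf 'opcode: crc32c\nengine: software\n' |
  while IFS=: read -r var val; do
      case "$var" in
          *opcode*) accel_opc=${val# }; echo "accel_opc=$accel_opc" ;;      # e.g. crc32c, copy, fill
          *engine*) accel_module=${val# }; echo "accel_module=$accel_module" ;;  # e.g. software
      esac
  done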
00:07:26.531 [2024-04-17 06:33:30.851554] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4071901 ] 00:07:26.531 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.531 [2024-04-17 06:33:30.913314] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.531 [2024-04-17 06:33:31.004702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.531 [2024-04-17 06:33:31.005343] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:26.531 06:33:31 -- accel/accel.sh@20 -- # val= 00:07:26.531 06:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.531 06:33:31 -- accel/accel.sh@19 -- # IFS=: 00:07:26.531 06:33:31 -- accel/accel.sh@19 -- # read -r var val 00:07:26.531 06:33:31 -- accel/accel.sh@20 -- # val= 00:07:26.531 06:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.531 06:33:31 -- accel/accel.sh@19 -- # IFS=: 00:07:26.531 06:33:31 -- accel/accel.sh@19 -- # read -r var val 00:07:26.531 06:33:31 -- accel/accel.sh@20 -- # val=0x1 00:07:26.531 06:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.531 06:33:31 -- accel/accel.sh@19 -- # IFS=: 00:07:26.531 06:33:31 -- accel/accel.sh@19 -- # read -r var val 00:07:26.531 06:33:31 -- accel/accel.sh@20 -- # val= 00:07:26.531 06:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.531 06:33:31 -- accel/accel.sh@19 -- # IFS=: 00:07:26.531 06:33:31 -- accel/accel.sh@19 -- # read -r var val 00:07:26.531 06:33:31 -- accel/accel.sh@20 -- # val= 00:07:26.531 06:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.531 06:33:31 -- accel/accel.sh@19 -- # IFS=: 00:07:26.531 06:33:31 -- accel/accel.sh@19 -- # read -r var val 00:07:26.531 06:33:31 -- accel/accel.sh@20 -- # val=copy 00:07:26.531 06:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.531 06:33:31 -- accel/accel.sh@23 -- # accel_opc=copy 00:07:26.531 06:33:31 -- accel/accel.sh@19 -- # IFS=: 00:07:26.531 06:33:31 -- accel/accel.sh@19 -- # read -r var val 00:07:26.531 06:33:31 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.531 06:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.531 06:33:31 -- accel/accel.sh@19 -- # IFS=: 00:07:26.531 06:33:31 -- accel/accel.sh@19 -- # read -r var val 00:07:26.531 06:33:31 -- accel/accel.sh@20 -- # val= 00:07:26.531 06:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.531 06:33:31 -- accel/accel.sh@19 -- # IFS=: 00:07:26.531 06:33:31 -- accel/accel.sh@19 -- # read -r var val 00:07:26.531 06:33:31 -- accel/accel.sh@20 -- # val=software 00:07:26.531 06:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.531 06:33:31 -- accel/accel.sh@22 -- # accel_module=software 00:07:26.531 06:33:31 -- accel/accel.sh@19 -- # IFS=: 00:07:26.531 06:33:31 -- accel/accel.sh@19 -- # read -r var val 00:07:26.531 06:33:31 -- accel/accel.sh@20 -- # val=32 00:07:26.531 06:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.531 06:33:31 -- accel/accel.sh@19 -- # IFS=: 00:07:26.531 06:33:31 -- accel/accel.sh@19 -- # read -r var val 00:07:26.531 06:33:31 -- accel/accel.sh@20 -- # val=32 00:07:26.531 06:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.531 06:33:31 -- accel/accel.sh@19 -- # IFS=: 00:07:26.531 06:33:31 -- accel/accel.sh@19 -- # read -r var val 00:07:26.531 06:33:31 -- accel/accel.sh@20 -- # val=1 00:07:26.531 06:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.531 06:33:31 -- 
accel/accel.sh@19 -- # IFS=: 00:07:26.531 06:33:31 -- accel/accel.sh@19 -- # read -r var val 00:07:26.531 06:33:31 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.531 06:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.531 06:33:31 -- accel/accel.sh@19 -- # IFS=: 00:07:26.531 06:33:31 -- accel/accel.sh@19 -- # read -r var val 00:07:26.531 06:33:31 -- accel/accel.sh@20 -- # val=Yes 00:07:26.532 06:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.532 06:33:31 -- accel/accel.sh@19 -- # IFS=: 00:07:26.532 06:33:31 -- accel/accel.sh@19 -- # read -r var val 00:07:26.532 06:33:31 -- accel/accel.sh@20 -- # val= 00:07:26.532 06:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.532 06:33:31 -- accel/accel.sh@19 -- # IFS=: 00:07:26.532 06:33:31 -- accel/accel.sh@19 -- # read -r var val 00:07:26.532 06:33:31 -- accel/accel.sh@20 -- # val= 00:07:26.532 06:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.532 06:33:31 -- accel/accel.sh@19 -- # IFS=: 00:07:26.532 06:33:31 -- accel/accel.sh@19 -- # read -r var val 00:07:27.904 06:33:32 -- accel/accel.sh@20 -- # val= 00:07:27.904 06:33:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.904 06:33:32 -- accel/accel.sh@19 -- # IFS=: 00:07:27.904 06:33:32 -- accel/accel.sh@19 -- # read -r var val 00:07:27.904 06:33:32 -- accel/accel.sh@20 -- # val= 00:07:27.904 06:33:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.904 06:33:32 -- accel/accel.sh@19 -- # IFS=: 00:07:27.904 06:33:32 -- accel/accel.sh@19 -- # read -r var val 00:07:27.904 06:33:32 -- accel/accel.sh@20 -- # val= 00:07:27.904 06:33:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.904 06:33:32 -- accel/accel.sh@19 -- # IFS=: 00:07:27.904 06:33:32 -- accel/accel.sh@19 -- # read -r var val 00:07:27.904 06:33:32 -- accel/accel.sh@20 -- # val= 00:07:27.904 06:33:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.904 06:33:32 -- accel/accel.sh@19 -- # IFS=: 00:07:27.904 06:33:32 -- accel/accel.sh@19 -- # read -r var val 00:07:27.904 06:33:32 -- accel/accel.sh@20 -- # val= 00:07:27.904 06:33:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.904 06:33:32 -- accel/accel.sh@19 -- # IFS=: 00:07:27.904 06:33:32 -- accel/accel.sh@19 -- # read -r var val 00:07:27.904 06:33:32 -- accel/accel.sh@20 -- # val= 00:07:27.904 06:33:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.904 06:33:32 -- accel/accel.sh@19 -- # IFS=: 00:07:27.904 06:33:32 -- accel/accel.sh@19 -- # read -r var val 00:07:27.904 06:33:32 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.904 06:33:32 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:27.904 06:33:32 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.904 00:07:27.904 real 0m1.395s 00:07:27.904 user 0m1.254s 00:07:27.904 sys 0m0.141s 00:07:27.904 06:33:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:27.904 06:33:32 -- common/autotest_common.sh@10 -- # set +x 00:07:27.904 ************************************ 00:07:27.904 END TEST accel_copy 00:07:27.904 ************************************ 00:07:27.904 06:33:32 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:27.904 06:33:32 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:27.904 06:33:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.904 06:33:32 -- common/autotest_common.sh@10 -- # set +x 00:07:27.904 ************************************ 00:07:27.904 START TEST accel_fill 00:07:27.904 ************************************ 00:07:27.904 06:33:32 -- common/autotest_common.sh@1111 -- 
# accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:27.904 06:33:32 -- accel/accel.sh@16 -- # local accel_opc 00:07:27.904 06:33:32 -- accel/accel.sh@17 -- # local accel_module 00:07:27.904 06:33:32 -- accel/accel.sh@19 -- # IFS=: 00:07:27.904 06:33:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:27.904 06:33:32 -- accel/accel.sh@19 -- # read -r var val 00:07:27.904 06:33:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:27.904 06:33:32 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.904 06:33:32 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.904 06:33:32 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.904 06:33:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.904 06:33:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.904 06:33:32 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.904 06:33:32 -- accel/accel.sh@40 -- # local IFS=, 00:07:27.904 06:33:32 -- accel/accel.sh@41 -- # jq -r . 00:07:27.904 [2024-04-17 06:33:32.364927] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:07:27.904 [2024-04-17 06:33:32.364990] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4072177 ] 00:07:27.904 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.904 [2024-04-17 06:33:32.426905] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.162 [2024-04-17 06:33:32.518061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.162 [2024-04-17 06:33:32.518676] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:28.162 06:33:32 -- accel/accel.sh@20 -- # val= 00:07:28.162 06:33:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # IFS=: 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # read -r var val 00:07:28.162 06:33:32 -- accel/accel.sh@20 -- # val= 00:07:28.162 06:33:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # IFS=: 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # read -r var val 00:07:28.162 06:33:32 -- accel/accel.sh@20 -- # val=0x1 00:07:28.162 06:33:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # IFS=: 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # read -r var val 00:07:28.162 06:33:32 -- accel/accel.sh@20 -- # val= 00:07:28.162 06:33:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # IFS=: 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # read -r var val 00:07:28.162 06:33:32 -- accel/accel.sh@20 -- # val= 00:07:28.162 06:33:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # IFS=: 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # read -r var val 00:07:28.162 06:33:32 -- accel/accel.sh@20 -- # val=fill 00:07:28.162 06:33:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.162 06:33:32 -- accel/accel.sh@23 -- # accel_opc=fill 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # IFS=: 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # read -r var val 00:07:28.162 06:33:32 -- accel/accel.sh@20 -- # val=0x80 00:07:28.162 06:33:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # IFS=: 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # read -r var val 
00:07:28.162 06:33:32 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.162 06:33:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # IFS=: 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # read -r var val 00:07:28.162 06:33:32 -- accel/accel.sh@20 -- # val= 00:07:28.162 06:33:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # IFS=: 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # read -r var val 00:07:28.162 06:33:32 -- accel/accel.sh@20 -- # val=software 00:07:28.162 06:33:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.162 06:33:32 -- accel/accel.sh@22 -- # accel_module=software 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # IFS=: 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # read -r var val 00:07:28.162 06:33:32 -- accel/accel.sh@20 -- # val=64 00:07:28.162 06:33:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # IFS=: 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # read -r var val 00:07:28.162 06:33:32 -- accel/accel.sh@20 -- # val=64 00:07:28.162 06:33:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # IFS=: 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # read -r var val 00:07:28.162 06:33:32 -- accel/accel.sh@20 -- # val=1 00:07:28.162 06:33:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # IFS=: 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # read -r var val 00:07:28.162 06:33:32 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.162 06:33:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # IFS=: 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # read -r var val 00:07:28.162 06:33:32 -- accel/accel.sh@20 -- # val=Yes 00:07:28.162 06:33:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # IFS=: 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # read -r var val 00:07:28.162 06:33:32 -- accel/accel.sh@20 -- # val= 00:07:28.162 06:33:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # IFS=: 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # read -r var val 00:07:28.162 06:33:32 -- accel/accel.sh@20 -- # val= 00:07:28.162 06:33:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # IFS=: 00:07:28.162 06:33:32 -- accel/accel.sh@19 -- # read -r var val 00:07:29.536 06:33:33 -- accel/accel.sh@20 -- # val= 00:07:29.537 06:33:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.537 06:33:33 -- accel/accel.sh@19 -- # IFS=: 00:07:29.537 06:33:33 -- accel/accel.sh@19 -- # read -r var val 00:07:29.537 06:33:33 -- accel/accel.sh@20 -- # val= 00:07:29.537 06:33:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.537 06:33:33 -- accel/accel.sh@19 -- # IFS=: 00:07:29.537 06:33:33 -- accel/accel.sh@19 -- # read -r var val 00:07:29.537 06:33:33 -- accel/accel.sh@20 -- # val= 00:07:29.537 06:33:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.537 06:33:33 -- accel/accel.sh@19 -- # IFS=: 00:07:29.537 06:33:33 -- accel/accel.sh@19 -- # read -r var val 00:07:29.537 06:33:33 -- accel/accel.sh@20 -- # val= 00:07:29.537 06:33:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.537 06:33:33 -- accel/accel.sh@19 -- # IFS=: 00:07:29.537 06:33:33 -- accel/accel.sh@19 -- # read -r var val 00:07:29.537 06:33:33 -- accel/accel.sh@20 -- # val= 00:07:29.537 06:33:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.537 06:33:33 -- accel/accel.sh@19 -- # IFS=: 
00:07:29.537 06:33:33 -- accel/accel.sh@19 -- # read -r var val 00:07:29.537 06:33:33 -- accel/accel.sh@20 -- # val= 00:07:29.537 06:33:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.537 06:33:33 -- accel/accel.sh@19 -- # IFS=: 00:07:29.537 06:33:33 -- accel/accel.sh@19 -- # read -r var val 00:07:29.537 06:33:33 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.537 06:33:33 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:29.537 06:33:33 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.537 00:07:29.537 real 0m1.396s 00:07:29.537 user 0m1.268s 00:07:29.537 sys 0m0.129s 00:07:29.537 06:33:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:29.537 06:33:33 -- common/autotest_common.sh@10 -- # set +x 00:07:29.537 ************************************ 00:07:29.537 END TEST accel_fill 00:07:29.537 ************************************ 00:07:29.537 06:33:33 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:29.537 06:33:33 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:29.537 06:33:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.537 06:33:33 -- common/autotest_common.sh@10 -- # set +x 00:07:29.537 ************************************ 00:07:29.537 START TEST accel_copy_crc32c 00:07:29.537 ************************************ 00:07:29.537 06:33:33 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:07:29.537 06:33:33 -- accel/accel.sh@16 -- # local accel_opc 00:07:29.537 06:33:33 -- accel/accel.sh@17 -- # local accel_module 00:07:29.537 06:33:33 -- accel/accel.sh@19 -- # IFS=: 00:07:29.537 06:33:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:29.537 06:33:33 -- accel/accel.sh@19 -- # read -r var val 00:07:29.537 06:33:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:29.537 06:33:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.537 06:33:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.537 06:33:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.537 06:33:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.537 06:33:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.537 06:33:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.537 06:33:33 -- accel/accel.sh@40 -- # local IFS=, 00:07:29.537 06:33:33 -- accel/accel.sh@41 -- # jq -r . 00:07:29.537 [2024-04-17 06:33:33.886553] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:07:29.537 [2024-04-17 06:33:33.886617] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4072348 ] 00:07:29.537 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.537 [2024-04-17 06:33:33.950472] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.537 [2024-04-17 06:33:34.040087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.537 [2024-04-17 06:33:34.040779] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:29.537 06:33:34 -- accel/accel.sh@20 -- # val= 00:07:29.537 06:33:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # IFS=: 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # read -r var val 00:07:29.537 06:33:34 -- accel/accel.sh@20 -- # val= 00:07:29.537 06:33:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # IFS=: 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # read -r var val 00:07:29.537 06:33:34 -- accel/accel.sh@20 -- # val=0x1 00:07:29.537 06:33:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # IFS=: 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # read -r var val 00:07:29.537 06:33:34 -- accel/accel.sh@20 -- # val= 00:07:29.537 06:33:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # IFS=: 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # read -r var val 00:07:29.537 06:33:34 -- accel/accel.sh@20 -- # val= 00:07:29.537 06:33:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # IFS=: 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # read -r var val 00:07:29.537 06:33:34 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:29.537 06:33:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.537 06:33:34 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # IFS=: 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # read -r var val 00:07:29.537 06:33:34 -- accel/accel.sh@20 -- # val=0 00:07:29.537 06:33:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # IFS=: 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # read -r var val 00:07:29.537 06:33:34 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.537 06:33:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # IFS=: 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # read -r var val 00:07:29.537 06:33:34 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.537 06:33:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # IFS=: 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # read -r var val 00:07:29.537 06:33:34 -- accel/accel.sh@20 -- # val= 00:07:29.537 06:33:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # IFS=: 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # read -r var val 00:07:29.537 06:33:34 -- accel/accel.sh@20 -- # val=software 00:07:29.537 06:33:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.537 06:33:34 -- accel/accel.sh@22 -- # accel_module=software 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # IFS=: 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # read -r var val 00:07:29.537 06:33:34 -- accel/accel.sh@20 -- # val=32 00:07:29.537 06:33:34 -- accel/accel.sh@21 -- # case "$var" in 
00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # IFS=: 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # read -r var val 00:07:29.537 06:33:34 -- accel/accel.sh@20 -- # val=32 00:07:29.537 06:33:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # IFS=: 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # read -r var val 00:07:29.537 06:33:34 -- accel/accel.sh@20 -- # val=1 00:07:29.537 06:33:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # IFS=: 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # read -r var val 00:07:29.537 06:33:34 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.537 06:33:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # IFS=: 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # read -r var val 00:07:29.537 06:33:34 -- accel/accel.sh@20 -- # val=Yes 00:07:29.537 06:33:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # IFS=: 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # read -r var val 00:07:29.537 06:33:34 -- accel/accel.sh@20 -- # val= 00:07:29.537 06:33:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # IFS=: 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # read -r var val 00:07:29.537 06:33:34 -- accel/accel.sh@20 -- # val= 00:07:29.537 06:33:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # IFS=: 00:07:29.537 06:33:34 -- accel/accel.sh@19 -- # read -r var val 00:07:30.910 06:33:35 -- accel/accel.sh@20 -- # val= 00:07:30.910 06:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.910 06:33:35 -- accel/accel.sh@19 -- # IFS=: 00:07:30.910 06:33:35 -- accel/accel.sh@19 -- # read -r var val 00:07:30.910 06:33:35 -- accel/accel.sh@20 -- # val= 00:07:30.910 06:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.910 06:33:35 -- accel/accel.sh@19 -- # IFS=: 00:07:30.910 06:33:35 -- accel/accel.sh@19 -- # read -r var val 00:07:30.910 06:33:35 -- accel/accel.sh@20 -- # val= 00:07:30.910 06:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.910 06:33:35 -- accel/accel.sh@19 -- # IFS=: 00:07:30.910 06:33:35 -- accel/accel.sh@19 -- # read -r var val 00:07:30.910 06:33:35 -- accel/accel.sh@20 -- # val= 00:07:30.910 06:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.910 06:33:35 -- accel/accel.sh@19 -- # IFS=: 00:07:30.910 06:33:35 -- accel/accel.sh@19 -- # read -r var val 00:07:30.910 06:33:35 -- accel/accel.sh@20 -- # val= 00:07:30.910 06:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.910 06:33:35 -- accel/accel.sh@19 -- # IFS=: 00:07:30.910 06:33:35 -- accel/accel.sh@19 -- # read -r var val 00:07:30.910 06:33:35 -- accel/accel.sh@20 -- # val= 00:07:30.910 06:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.910 06:33:35 -- accel/accel.sh@19 -- # IFS=: 00:07:30.910 06:33:35 -- accel/accel.sh@19 -- # read -r var val 00:07:30.910 06:33:35 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:30.910 06:33:35 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:30.910 06:33:35 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.910 00:07:30.910 real 0m1.408s 00:07:30.910 user 0m1.268s 00:07:30.910 sys 0m0.142s 00:07:30.910 06:33:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:30.910 06:33:35 -- common/autotest_common.sh@10 -- # set +x 00:07:30.910 ************************************ 00:07:30.910 END TEST accel_copy_crc32c 00:07:30.910 ************************************ 00:07:30.910 
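Each sub-test above comes down to a single accel_perf invocation whose binary path and workload flags appear verbatim in the trace (for example -t 1 -w copy_crc32c -y for the run that just finished). The lines below are a minimal sketch of repeating such a run by hand; leaving out the harness-generated JSON config normally passed as -c /dev/fd/62 is an assumption, on the expectation that accel_perf falls back to its default configuration.

  # Binary path and workload flags copied from the trace above.
  APP=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
  # Re-run the 1-second software copy_crc32c workload with the same -t/-w/-y flags;
  # the per-test JSON config (-c /dev/fd/62) is omitted here (assumed optional).
  $APP -t 1 -w copy_crc32c -y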
06:33:35 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:30.910 06:33:35 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:30.910 06:33:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:30.910 06:33:35 -- common/autotest_common.sh@10 -- # set +x 00:07:30.910 ************************************ 00:07:30.910 START TEST accel_copy_crc32c_C2 00:07:30.910 ************************************ 00:07:30.910 06:33:35 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:30.910 06:33:35 -- accel/accel.sh@16 -- # local accel_opc 00:07:30.910 06:33:35 -- accel/accel.sh@17 -- # local accel_module 00:07:30.910 06:33:35 -- accel/accel.sh@19 -- # IFS=: 00:07:30.910 06:33:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:30.910 06:33:35 -- accel/accel.sh@19 -- # read -r var val 00:07:30.910 06:33:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:30.910 06:33:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.910 06:33:35 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.910 06:33:35 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.911 06:33:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.911 06:33:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.911 06:33:35 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:30.911 06:33:35 -- accel/accel.sh@40 -- # local IFS=, 00:07:30.911 06:33:35 -- accel/accel.sh@41 -- # jq -r . 00:07:30.911 [2024-04-17 06:33:35.413514] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:07:30.911 [2024-04-17 06:33:35.413579] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4072507 ] 00:07:30.911 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.911 [2024-04-17 06:33:35.478765] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.169 [2024-04-17 06:33:35.571558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.169 [2024-04-17 06:33:35.572203] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:31.169 06:33:35 -- accel/accel.sh@20 -- # val= 00:07:31.169 06:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # IFS=: 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # read -r var val 00:07:31.169 06:33:35 -- accel/accel.sh@20 -- # val= 00:07:31.169 06:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # IFS=: 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # read -r var val 00:07:31.169 06:33:35 -- accel/accel.sh@20 -- # val=0x1 00:07:31.169 06:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # IFS=: 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # read -r var val 00:07:31.169 06:33:35 -- accel/accel.sh@20 -- # val= 00:07:31.169 06:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # IFS=: 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # read -r var val 00:07:31.169 06:33:35 -- accel/accel.sh@20 -- # val= 00:07:31.169 06:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # IFS=: 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # read -r var val 00:07:31.169 06:33:35 -- 
accel/accel.sh@20 -- # val=copy_crc32c 00:07:31.169 06:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.169 06:33:35 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # IFS=: 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # read -r var val 00:07:31.169 06:33:35 -- accel/accel.sh@20 -- # val=0 00:07:31.169 06:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # IFS=: 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # read -r var val 00:07:31.169 06:33:35 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:31.169 06:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # IFS=: 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # read -r var val 00:07:31.169 06:33:35 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:31.169 06:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # IFS=: 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # read -r var val 00:07:31.169 06:33:35 -- accel/accel.sh@20 -- # val= 00:07:31.169 06:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # IFS=: 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # read -r var val 00:07:31.169 06:33:35 -- accel/accel.sh@20 -- # val=software 00:07:31.169 06:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.169 06:33:35 -- accel/accel.sh@22 -- # accel_module=software 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # IFS=: 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # read -r var val 00:07:31.169 06:33:35 -- accel/accel.sh@20 -- # val=32 00:07:31.169 06:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # IFS=: 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # read -r var val 00:07:31.169 06:33:35 -- accel/accel.sh@20 -- # val=32 00:07:31.169 06:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # IFS=: 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # read -r var val 00:07:31.169 06:33:35 -- accel/accel.sh@20 -- # val=1 00:07:31.169 06:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # IFS=: 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # read -r var val 00:07:31.169 06:33:35 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.169 06:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # IFS=: 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # read -r var val 00:07:31.169 06:33:35 -- accel/accel.sh@20 -- # val=Yes 00:07:31.169 06:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # IFS=: 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # read -r var val 00:07:31.169 06:33:35 -- accel/accel.sh@20 -- # val= 00:07:31.169 06:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # IFS=: 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # read -r var val 00:07:31.169 06:33:35 -- accel/accel.sh@20 -- # val= 00:07:31.169 06:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # IFS=: 00:07:31.169 06:33:35 -- accel/accel.sh@19 -- # read -r var val 00:07:32.543 06:33:36 -- accel/accel.sh@20 -- # val= 00:07:32.543 06:33:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.543 06:33:36 -- accel/accel.sh@19 -- # IFS=: 00:07:32.543 06:33:36 -- accel/accel.sh@19 -- # read -r var val 00:07:32.543 06:33:36 -- accel/accel.sh@20 -- # val= 00:07:32.543 06:33:36 -- accel/accel.sh@21 -- # 
case "$var" in 00:07:32.543 06:33:36 -- accel/accel.sh@19 -- # IFS=: 00:07:32.543 06:33:36 -- accel/accel.sh@19 -- # read -r var val 00:07:32.543 06:33:36 -- accel/accel.sh@20 -- # val= 00:07:32.543 06:33:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.543 06:33:36 -- accel/accel.sh@19 -- # IFS=: 00:07:32.543 06:33:36 -- accel/accel.sh@19 -- # read -r var val 00:07:32.543 06:33:36 -- accel/accel.sh@20 -- # val= 00:07:32.543 06:33:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.543 06:33:36 -- accel/accel.sh@19 -- # IFS=: 00:07:32.543 06:33:36 -- accel/accel.sh@19 -- # read -r var val 00:07:32.543 06:33:36 -- accel/accel.sh@20 -- # val= 00:07:32.543 06:33:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.543 06:33:36 -- accel/accel.sh@19 -- # IFS=: 00:07:32.543 06:33:36 -- accel/accel.sh@19 -- # read -r var val 00:07:32.543 06:33:36 -- accel/accel.sh@20 -- # val= 00:07:32.543 06:33:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.543 06:33:36 -- accel/accel.sh@19 -- # IFS=: 00:07:32.543 06:33:36 -- accel/accel.sh@19 -- # read -r var val 00:07:32.543 06:33:36 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:32.543 06:33:36 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:32.543 06:33:36 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.543 00:07:32.543 real 0m1.399s 00:07:32.543 user 0m1.258s 00:07:32.543 sys 0m0.142s 00:07:32.543 06:33:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:32.543 06:33:36 -- common/autotest_common.sh@10 -- # set +x 00:07:32.543 ************************************ 00:07:32.543 END TEST accel_copy_crc32c_C2 00:07:32.543 ************************************ 00:07:32.543 06:33:36 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:32.543 06:33:36 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:32.543 06:33:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:32.543 06:33:36 -- common/autotest_common.sh@10 -- # set +x 00:07:32.543 ************************************ 00:07:32.543 START TEST accel_dualcast 00:07:32.543 ************************************ 00:07:32.543 06:33:36 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:07:32.543 06:33:36 -- accel/accel.sh@16 -- # local accel_opc 00:07:32.543 06:33:36 -- accel/accel.sh@17 -- # local accel_module 00:07:32.543 06:33:36 -- accel/accel.sh@19 -- # IFS=: 00:07:32.543 06:33:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:32.543 06:33:36 -- accel/accel.sh@19 -- # read -r var val 00:07:32.543 06:33:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:32.543 06:33:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.543 06:33:36 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.543 06:33:36 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.543 06:33:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.543 06:33:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.543 06:33:36 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.543 06:33:36 -- accel/accel.sh@40 -- # local IFS=, 00:07:32.543 06:33:36 -- accel/accel.sh@41 -- # jq -r . 00:07:32.543 [2024-04-17 06:33:36.933058] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:07:32.543 [2024-04-17 06:33:36.933120] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4072790 ] 00:07:32.543 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.543 [2024-04-17 06:33:36.995198] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.543 [2024-04-17 06:33:37.086343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.543 [2024-04-17 06:33:37.086980] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:32.543 06:33:37 -- accel/accel.sh@20 -- # val= 00:07:32.543 06:33:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.543 06:33:37 -- accel/accel.sh@19 -- # IFS=: 00:07:32.543 06:33:37 -- accel/accel.sh@19 -- # read -r var val 00:07:32.543 06:33:37 -- accel/accel.sh@20 -- # val= 00:07:32.543 06:33:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.543 06:33:37 -- accel/accel.sh@19 -- # IFS=: 00:07:32.543 06:33:37 -- accel/accel.sh@19 -- # read -r var val 00:07:32.543 06:33:37 -- accel/accel.sh@20 -- # val=0x1 00:07:32.543 06:33:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.543 06:33:37 -- accel/accel.sh@19 -- # IFS=: 00:07:32.543 06:33:37 -- accel/accel.sh@19 -- # read -r var val 00:07:32.543 06:33:37 -- accel/accel.sh@20 -- # val= 00:07:32.543 06:33:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.543 06:33:37 -- accel/accel.sh@19 -- # IFS=: 00:07:32.543 06:33:37 -- accel/accel.sh@19 -- # read -r var val 00:07:32.543 06:33:37 -- accel/accel.sh@20 -- # val= 00:07:32.543 06:33:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.543 06:33:37 -- accel/accel.sh@19 -- # IFS=: 00:07:32.543 06:33:37 -- accel/accel.sh@19 -- # read -r var val 00:07:32.543 06:33:37 -- accel/accel.sh@20 -- # val=dualcast 00:07:32.543 06:33:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.543 06:33:37 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:32.543 06:33:37 -- accel/accel.sh@19 -- # IFS=: 00:07:32.543 06:33:37 -- accel/accel.sh@19 -- # read -r var val 00:07:32.543 06:33:37 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:32.543 06:33:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.543 06:33:37 -- accel/accel.sh@19 -- # IFS=: 00:07:32.543 06:33:37 -- accel/accel.sh@19 -- # read -r var val 00:07:32.543 06:33:37 -- accel/accel.sh@20 -- # val= 00:07:32.543 06:33:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.543 06:33:37 -- accel/accel.sh@19 -- # IFS=: 00:07:32.543 06:33:37 -- accel/accel.sh@19 -- # read -r var val 00:07:32.543 06:33:37 -- accel/accel.sh@20 -- # val=software 00:07:32.543 06:33:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.543 06:33:37 -- accel/accel.sh@22 -- # accel_module=software 00:07:32.543 06:33:37 -- accel/accel.sh@19 -- # IFS=: 00:07:32.543 06:33:37 -- accel/accel.sh@19 -- # read -r var val 00:07:32.543 06:33:37 -- accel/accel.sh@20 -- # val=32 00:07:32.543 06:33:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.543 06:33:37 -- accel/accel.sh@19 -- # IFS=: 00:07:32.543 06:33:37 -- accel/accel.sh@19 -- # read -r var val 00:07:32.543 06:33:37 -- accel/accel.sh@20 -- # val=32 00:07:32.543 06:33:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.543 06:33:37 -- accel/accel.sh@19 -- # IFS=: 00:07:32.543 06:33:37 -- accel/accel.sh@19 -- # read -r var val 00:07:32.543 06:33:37 -- accel/accel.sh@20 -- # val=1 00:07:32.543 06:33:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.543 06:33:37 -- 
accel/accel.sh@19 -- # IFS=: 00:07:32.543 06:33:37 -- accel/accel.sh@19 -- # read -r var val 00:07:32.543 06:33:37 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:32.543 06:33:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.543 06:33:37 -- accel/accel.sh@19 -- # IFS=: 00:07:32.543 06:33:37 -- accel/accel.sh@19 -- # read -r var val 00:07:32.543 06:33:37 -- accel/accel.sh@20 -- # val=Yes 00:07:32.543 06:33:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.543 06:33:37 -- accel/accel.sh@19 -- # IFS=: 00:07:32.543 06:33:37 -- accel/accel.sh@19 -- # read -r var val 00:07:32.543 06:33:37 -- accel/accel.sh@20 -- # val= 00:07:32.543 06:33:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.543 06:33:37 -- accel/accel.sh@19 -- # IFS=: 00:07:32.543 06:33:37 -- accel/accel.sh@19 -- # read -r var val 00:07:32.543 06:33:37 -- accel/accel.sh@20 -- # val= 00:07:32.543 06:33:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.543 06:33:37 -- accel/accel.sh@19 -- # IFS=: 00:07:32.803 06:33:37 -- accel/accel.sh@19 -- # read -r var val 00:07:33.771 06:33:38 -- accel/accel.sh@20 -- # val= 00:07:33.771 06:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.771 06:33:38 -- accel/accel.sh@19 -- # IFS=: 00:07:33.771 06:33:38 -- accel/accel.sh@19 -- # read -r var val 00:07:33.771 06:33:38 -- accel/accel.sh@20 -- # val= 00:07:33.771 06:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.771 06:33:38 -- accel/accel.sh@19 -- # IFS=: 00:07:33.771 06:33:38 -- accel/accel.sh@19 -- # read -r var val 00:07:33.771 06:33:38 -- accel/accel.sh@20 -- # val= 00:07:33.771 06:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.771 06:33:38 -- accel/accel.sh@19 -- # IFS=: 00:07:33.771 06:33:38 -- accel/accel.sh@19 -- # read -r var val 00:07:33.771 06:33:38 -- accel/accel.sh@20 -- # val= 00:07:33.771 06:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.771 06:33:38 -- accel/accel.sh@19 -- # IFS=: 00:07:33.771 06:33:38 -- accel/accel.sh@19 -- # read -r var val 00:07:33.771 06:33:38 -- accel/accel.sh@20 -- # val= 00:07:33.771 06:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.771 06:33:38 -- accel/accel.sh@19 -- # IFS=: 00:07:33.771 06:33:38 -- accel/accel.sh@19 -- # read -r var val 00:07:33.771 06:33:38 -- accel/accel.sh@20 -- # val= 00:07:33.771 06:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.771 06:33:38 -- accel/accel.sh@19 -- # IFS=: 00:07:33.771 06:33:38 -- accel/accel.sh@19 -- # read -r var val 00:07:33.771 06:33:38 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.771 06:33:38 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:33.771 06:33:38 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.771 00:07:33.771 real 0m1.395s 00:07:33.771 user 0m1.255s 00:07:33.771 sys 0m0.141s 00:07:33.771 06:33:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:33.771 06:33:38 -- common/autotest_common.sh@10 -- # set +x 00:07:33.771 ************************************ 00:07:33.771 END TEST accel_dualcast 00:07:33.771 ************************************ 00:07:33.771 06:33:38 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:33.771 06:33:38 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:33.771 06:33:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.771 06:33:38 -- common/autotest_common.sh@10 -- # set +x 00:07:34.030 ************************************ 00:07:34.030 START TEST accel_compare 00:07:34.030 ************************************ 00:07:34.030 06:33:38 -- common/autotest_common.sh@1111 -- # 
accel_test -t 1 -w compare -y 00:07:34.030 06:33:38 -- accel/accel.sh@16 -- # local accel_opc 00:07:34.030 06:33:38 -- accel/accel.sh@17 -- # local accel_module 00:07:34.030 06:33:38 -- accel/accel.sh@19 -- # IFS=: 00:07:34.030 06:33:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:34.030 06:33:38 -- accel/accel.sh@19 -- # read -r var val 00:07:34.030 06:33:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:34.030 06:33:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.030 06:33:38 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.030 06:33:38 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.030 06:33:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.030 06:33:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.030 06:33:38 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.030 06:33:38 -- accel/accel.sh@40 -- # local IFS=, 00:07:34.030 06:33:38 -- accel/accel.sh@41 -- # jq -r . 00:07:34.030 [2024-04-17 06:33:38.452780] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:07:34.030 [2024-04-17 06:33:38.452841] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4072957 ] 00:07:34.030 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.030 [2024-04-17 06:33:38.516532] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.030 [2024-04-17 06:33:38.608149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.030 [2024-04-17 06:33:38.608841] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:34.288 06:33:38 -- accel/accel.sh@20 -- # val= 00:07:34.288 06:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.288 06:33:38 -- accel/accel.sh@19 -- # IFS=: 00:07:34.288 06:33:38 -- accel/accel.sh@19 -- # read -r var val 00:07:34.288 06:33:38 -- accel/accel.sh@20 -- # val= 00:07:34.288 06:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.288 06:33:38 -- accel/accel.sh@19 -- # IFS=: 00:07:34.289 06:33:38 -- accel/accel.sh@19 -- # read -r var val 00:07:34.289 06:33:38 -- accel/accel.sh@20 -- # val=0x1 00:07:34.289 06:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.289 06:33:38 -- accel/accel.sh@19 -- # IFS=: 00:07:34.289 06:33:38 -- accel/accel.sh@19 -- # read -r var val 00:07:34.289 06:33:38 -- accel/accel.sh@20 -- # val= 00:07:34.289 06:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.289 06:33:38 -- accel/accel.sh@19 -- # IFS=: 00:07:34.289 06:33:38 -- accel/accel.sh@19 -- # read -r var val 00:07:34.289 06:33:38 -- accel/accel.sh@20 -- # val= 00:07:34.289 06:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.289 06:33:38 -- accel/accel.sh@19 -- # IFS=: 00:07:34.289 06:33:38 -- accel/accel.sh@19 -- # read -r var val 00:07:34.289 06:33:38 -- accel/accel.sh@20 -- # val=compare 00:07:34.289 06:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.289 06:33:38 -- accel/accel.sh@23 -- # accel_opc=compare 00:07:34.289 06:33:38 -- accel/accel.sh@19 -- # IFS=: 00:07:34.289 06:33:38 -- accel/accel.sh@19 -- # read -r var val 00:07:34.289 06:33:38 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:34.289 06:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.289 06:33:38 -- accel/accel.sh@19 -- # IFS=: 00:07:34.289 06:33:38 -- accel/accel.sh@19 -- # read -r var val 00:07:34.289 06:33:38 -- 
accel/accel.sh@20 -- # val= 00:07:34.289 06:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.289 06:33:38 -- accel/accel.sh@19 -- # IFS=: 00:07:34.289 06:33:38 -- accel/accel.sh@19 -- # read -r var val 00:07:34.289 06:33:38 -- accel/accel.sh@20 -- # val=software 00:07:34.289 06:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.289 06:33:38 -- accel/accel.sh@22 -- # accel_module=software 00:07:34.289 06:33:38 -- accel/accel.sh@19 -- # IFS=: 00:07:34.289 06:33:38 -- accel/accel.sh@19 -- # read -r var val 00:07:34.289 06:33:38 -- accel/accel.sh@20 -- # val=32 00:07:34.289 06:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.289 06:33:38 -- accel/accel.sh@19 -- # IFS=: 00:07:34.289 06:33:38 -- accel/accel.sh@19 -- # read -r var val 00:07:34.289 06:33:38 -- accel/accel.sh@20 -- # val=32 00:07:34.289 06:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.289 06:33:38 -- accel/accel.sh@19 -- # IFS=: 00:07:34.289 06:33:38 -- accel/accel.sh@19 -- # read -r var val 00:07:34.289 06:33:38 -- accel/accel.sh@20 -- # val=1 00:07:34.289 06:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.289 06:33:38 -- accel/accel.sh@19 -- # IFS=: 00:07:34.289 06:33:38 -- accel/accel.sh@19 -- # read -r var val 00:07:34.289 06:33:38 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:34.289 06:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.289 06:33:38 -- accel/accel.sh@19 -- # IFS=: 00:07:34.289 06:33:38 -- accel/accel.sh@19 -- # read -r var val 00:07:34.289 06:33:38 -- accel/accel.sh@20 -- # val=Yes 00:07:34.289 06:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.289 06:33:38 -- accel/accel.sh@19 -- # IFS=: 00:07:34.289 06:33:38 -- accel/accel.sh@19 -- # read -r var val 00:07:34.289 06:33:38 -- accel/accel.sh@20 -- # val= 00:07:34.289 06:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.289 06:33:38 -- accel/accel.sh@19 -- # IFS=: 00:07:34.289 06:33:38 -- accel/accel.sh@19 -- # read -r var val 00:07:34.289 06:33:38 -- accel/accel.sh@20 -- # val= 00:07:34.289 06:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.289 06:33:38 -- accel/accel.sh@19 -- # IFS=: 00:07:34.289 06:33:38 -- accel/accel.sh@19 -- # read -r var val 00:07:35.664 06:33:39 -- accel/accel.sh@20 -- # val= 00:07:35.664 06:33:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.664 06:33:39 -- accel/accel.sh@19 -- # IFS=: 00:07:35.664 06:33:39 -- accel/accel.sh@19 -- # read -r var val 00:07:35.664 06:33:39 -- accel/accel.sh@20 -- # val= 00:07:35.664 06:33:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.664 06:33:39 -- accel/accel.sh@19 -- # IFS=: 00:07:35.664 06:33:39 -- accel/accel.sh@19 -- # read -r var val 00:07:35.664 06:33:39 -- accel/accel.sh@20 -- # val= 00:07:35.664 06:33:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.664 06:33:39 -- accel/accel.sh@19 -- # IFS=: 00:07:35.664 06:33:39 -- accel/accel.sh@19 -- # read -r var val 00:07:35.664 06:33:39 -- accel/accel.sh@20 -- # val= 00:07:35.664 06:33:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.664 06:33:39 -- accel/accel.sh@19 -- # IFS=: 00:07:35.664 06:33:39 -- accel/accel.sh@19 -- # read -r var val 00:07:35.664 06:33:39 -- accel/accel.sh@20 -- # val= 00:07:35.664 06:33:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.664 06:33:39 -- accel/accel.sh@19 -- # IFS=: 00:07:35.664 06:33:39 -- accel/accel.sh@19 -- # read -r var val 00:07:35.664 06:33:39 -- accel/accel.sh@20 -- # val= 00:07:35.664 06:33:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.664 06:33:39 -- accel/accel.sh@19 -- # IFS=: 00:07:35.664 06:33:39 -- 
accel/accel.sh@19 -- # read -r var val 00:07:35.664 06:33:39 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:35.664 06:33:39 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:35.664 06:33:39 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.664 00:07:35.664 real 0m1.407s 00:07:35.664 user 0m1.265s 00:07:35.664 sys 0m0.142s 00:07:35.664 06:33:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:35.664 06:33:39 -- common/autotest_common.sh@10 -- # set +x 00:07:35.664 ************************************ 00:07:35.664 END TEST accel_compare 00:07:35.664 ************************************ 00:07:35.664 06:33:39 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:35.664 06:33:39 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:35.664 06:33:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.664 06:33:39 -- common/autotest_common.sh@10 -- # set +x 00:07:35.664 ************************************ 00:07:35.664 START TEST accel_xor 00:07:35.664 ************************************ 00:07:35.664 06:33:39 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:07:35.664 06:33:39 -- accel/accel.sh@16 -- # local accel_opc 00:07:35.664 06:33:39 -- accel/accel.sh@17 -- # local accel_module 00:07:35.664 06:33:39 -- accel/accel.sh@19 -- # IFS=: 00:07:35.664 06:33:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:35.664 06:33:39 -- accel/accel.sh@19 -- # read -r var val 00:07:35.664 06:33:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:35.664 06:33:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.664 06:33:39 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.664 06:33:39 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.664 06:33:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.664 06:33:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.664 06:33:39 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.664 06:33:39 -- accel/accel.sh@40 -- # local IFS=, 00:07:35.664 06:33:39 -- accel/accel.sh@41 -- # jq -r . 00:07:35.664 [2024-04-17 06:33:39.984770] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
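The compare case above closes out at roughly 1.4 s of wall time and the harness moves straight on to the xor case. The xtrace lines make the per-test pattern visible: accel_perf is launched with a JSON accel config on /dev/fd/62, its colon-separated summary output is split by a "while IFS=: read -r var val" loop, the opcode and module values are remembered, and the case passes only if the run used the software module. A minimal sketch of that pattern for a standalone run from an SPDK checkout; the summary-label patterns and the omission of the -c /dev/fd/62 config are assumptions, not copied from accel.sh:

    accel_perf=./build/examples/accel_perf   # path as it appears in the log, relative to the spdk tree
    accel_opc='' accel_module=''
    while IFS=: read -r var val; do
        case "$var" in
            # the real accel.sh matches specific summary labels; these globs are placeholders
            *[Ww]orkload*) accel_opc=$(echo "$val" | tr -d ' ') ;;
            *[Mm]odule*)   accel_module=$(echo "$val" | tr -d ' ') ;;
        esac
    done < <("$accel_perf" -t 1 -w xor -y)
    [[ -n $accel_module && -n $accel_opc && $accel_module == software ]] && echo "xor ran on the software module"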
00:07:35.664 [2024-04-17 06:33:39.984836] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4073122 ] 00:07:35.664 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.664 [2024-04-17 06:33:40.050493] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.664 [2024-04-17 06:33:40.142707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.664 [2024-04-17 06:33:40.143402] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:35.664 06:33:40 -- accel/accel.sh@20 -- # val= 00:07:35.664 06:33:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.664 06:33:40 -- accel/accel.sh@19 -- # IFS=: 00:07:35.664 06:33:40 -- accel/accel.sh@19 -- # read -r var val 00:07:35.664 06:33:40 -- accel/accel.sh@20 -- # val= 00:07:35.664 06:33:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.664 06:33:40 -- accel/accel.sh@19 -- # IFS=: 00:07:35.664 06:33:40 -- accel/accel.sh@19 -- # read -r var val 00:07:35.664 06:33:40 -- accel/accel.sh@20 -- # val=0x1 00:07:35.664 06:33:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.664 06:33:40 -- accel/accel.sh@19 -- # IFS=: 00:07:35.664 06:33:40 -- accel/accel.sh@19 -- # read -r var val 00:07:35.664 06:33:40 -- accel/accel.sh@20 -- # val= 00:07:35.664 06:33:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.664 06:33:40 -- accel/accel.sh@19 -- # IFS=: 00:07:35.664 06:33:40 -- accel/accel.sh@19 -- # read -r var val 00:07:35.664 06:33:40 -- accel/accel.sh@20 -- # val= 00:07:35.664 06:33:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.664 06:33:40 -- accel/accel.sh@19 -- # IFS=: 00:07:35.664 06:33:40 -- accel/accel.sh@19 -- # read -r var val 00:07:35.664 06:33:40 -- accel/accel.sh@20 -- # val=xor 00:07:35.664 06:33:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.664 06:33:40 -- accel/accel.sh@23 -- # accel_opc=xor 00:07:35.664 06:33:40 -- accel/accel.sh@19 -- # IFS=: 00:07:35.664 06:33:40 -- accel/accel.sh@19 -- # read -r var val 00:07:35.664 06:33:40 -- accel/accel.sh@20 -- # val=2 00:07:35.664 06:33:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.664 06:33:40 -- accel/accel.sh@19 -- # IFS=: 00:07:35.664 06:33:40 -- accel/accel.sh@19 -- # read -r var val 00:07:35.664 06:33:40 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:35.664 06:33:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.664 06:33:40 -- accel/accel.sh@19 -- # IFS=: 00:07:35.664 06:33:40 -- accel/accel.sh@19 -- # read -r var val 00:07:35.664 06:33:40 -- accel/accel.sh@20 -- # val= 00:07:35.664 06:33:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.664 06:33:40 -- accel/accel.sh@19 -- # IFS=: 00:07:35.664 06:33:40 -- accel/accel.sh@19 -- # read -r var val 00:07:35.664 06:33:40 -- accel/accel.sh@20 -- # val=software 00:07:35.664 06:33:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.664 06:33:40 -- accel/accel.sh@22 -- # accel_module=software 00:07:35.664 06:33:40 -- accel/accel.sh@19 -- # IFS=: 00:07:35.664 06:33:40 -- accel/accel.sh@19 -- # read -r var val 00:07:35.664 06:33:40 -- accel/accel.sh@20 -- # val=32 00:07:35.664 06:33:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.665 06:33:40 -- accel/accel.sh@19 -- # IFS=: 00:07:35.665 06:33:40 -- accel/accel.sh@19 -- # read -r var val 00:07:35.665 06:33:40 -- accel/accel.sh@20 -- # val=32 00:07:35.665 06:33:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.665 06:33:40 -- 
accel/accel.sh@19 -- # IFS=: 00:07:35.665 06:33:40 -- accel/accel.sh@19 -- # read -r var val 00:07:35.665 06:33:40 -- accel/accel.sh@20 -- # val=1 00:07:35.665 06:33:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.665 06:33:40 -- accel/accel.sh@19 -- # IFS=: 00:07:35.665 06:33:40 -- accel/accel.sh@19 -- # read -r var val 00:07:35.665 06:33:40 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:35.665 06:33:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.665 06:33:40 -- accel/accel.sh@19 -- # IFS=: 00:07:35.665 06:33:40 -- accel/accel.sh@19 -- # read -r var val 00:07:35.665 06:33:40 -- accel/accel.sh@20 -- # val=Yes 00:07:35.665 06:33:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.665 06:33:40 -- accel/accel.sh@19 -- # IFS=: 00:07:35.665 06:33:40 -- accel/accel.sh@19 -- # read -r var val 00:07:35.665 06:33:40 -- accel/accel.sh@20 -- # val= 00:07:35.665 06:33:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.665 06:33:40 -- accel/accel.sh@19 -- # IFS=: 00:07:35.665 06:33:40 -- accel/accel.sh@19 -- # read -r var val 00:07:35.665 06:33:40 -- accel/accel.sh@20 -- # val= 00:07:35.665 06:33:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.665 06:33:40 -- accel/accel.sh@19 -- # IFS=: 00:07:35.665 06:33:40 -- accel/accel.sh@19 -- # read -r var val 00:07:37.041 06:33:41 -- accel/accel.sh@20 -- # val= 00:07:37.041 06:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.041 06:33:41 -- accel/accel.sh@19 -- # IFS=: 00:07:37.041 06:33:41 -- accel/accel.sh@19 -- # read -r var val 00:07:37.041 06:33:41 -- accel/accel.sh@20 -- # val= 00:07:37.041 06:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.041 06:33:41 -- accel/accel.sh@19 -- # IFS=: 00:07:37.041 06:33:41 -- accel/accel.sh@19 -- # read -r var val 00:07:37.041 06:33:41 -- accel/accel.sh@20 -- # val= 00:07:37.041 06:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.041 06:33:41 -- accel/accel.sh@19 -- # IFS=: 00:07:37.041 06:33:41 -- accel/accel.sh@19 -- # read -r var val 00:07:37.041 06:33:41 -- accel/accel.sh@20 -- # val= 00:07:37.041 06:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.041 06:33:41 -- accel/accel.sh@19 -- # IFS=: 00:07:37.041 06:33:41 -- accel/accel.sh@19 -- # read -r var val 00:07:37.041 06:33:41 -- accel/accel.sh@20 -- # val= 00:07:37.041 06:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.041 06:33:41 -- accel/accel.sh@19 -- # IFS=: 00:07:37.041 06:33:41 -- accel/accel.sh@19 -- # read -r var val 00:07:37.041 06:33:41 -- accel/accel.sh@20 -- # val= 00:07:37.041 06:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.041 06:33:41 -- accel/accel.sh@19 -- # IFS=: 00:07:37.041 06:33:41 -- accel/accel.sh@19 -- # read -r var val 00:07:37.041 06:33:41 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:37.041 06:33:41 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:37.041 06:33:41 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.041 00:07:37.041 real 0m1.403s 00:07:37.041 user 0m1.254s 00:07:37.041 sys 0m0.149s 00:07:37.041 06:33:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:37.041 06:33:41 -- common/autotest_common.sh@10 -- # set +x 00:07:37.041 ************************************ 00:07:37.041 END TEST accel_xor 00:07:37.041 ************************************ 00:07:37.041 06:33:41 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:37.041 06:33:41 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:37.041 06:33:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.041 06:33:41 -- 
common/autotest_common.sh@10 -- # set +x 00:07:37.041 ************************************ 00:07:37.041 START TEST accel_xor 00:07:37.041 ************************************ 00:07:37.041 06:33:41 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:07:37.041 06:33:41 -- accel/accel.sh@16 -- # local accel_opc 00:07:37.041 06:33:41 -- accel/accel.sh@17 -- # local accel_module 00:07:37.041 06:33:41 -- accel/accel.sh@19 -- # IFS=: 00:07:37.041 06:33:41 -- accel/accel.sh@19 -- # read -r var val 00:07:37.041 06:33:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:37.041 06:33:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:37.041 06:33:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.041 06:33:41 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.041 06:33:41 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.041 06:33:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.041 06:33:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.041 06:33:41 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.041 06:33:41 -- accel/accel.sh@40 -- # local IFS=, 00:07:37.041 06:33:41 -- accel/accel.sh@41 -- # jq -r . 00:07:37.041 [2024-04-17 06:33:41.503065] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:07:37.041 [2024-04-17 06:33:41.503132] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4073396 ] 00:07:37.041 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.041 [2024-04-17 06:33:41.566190] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.300 [2024-04-17 06:33:41.658546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.300 [2024-04-17 06:33:41.659208] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:37.300 06:33:41 -- accel/accel.sh@20 -- # val= 00:07:37.300 06:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # IFS=: 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # read -r var val 00:07:37.300 06:33:41 -- accel/accel.sh@20 -- # val= 00:07:37.300 06:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # IFS=: 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # read -r var val 00:07:37.300 06:33:41 -- accel/accel.sh@20 -- # val=0x1 00:07:37.300 06:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # IFS=: 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # read -r var val 00:07:37.300 06:33:41 -- accel/accel.sh@20 -- # val= 00:07:37.300 06:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # IFS=: 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # read -r var val 00:07:37.300 06:33:41 -- accel/accel.sh@20 -- # val= 00:07:37.300 06:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # IFS=: 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # read -r var val 00:07:37.300 06:33:41 -- accel/accel.sh@20 -- # val=xor 00:07:37.300 06:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.300 06:33:41 -- accel/accel.sh@23 -- # accel_opc=xor 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # IFS=: 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # read -r var val 00:07:37.300 06:33:41 -- accel/accel.sh@20 -- # 
val=3 00:07:37.300 06:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # IFS=: 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # read -r var val 00:07:37.300 06:33:41 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:37.300 06:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # IFS=: 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # read -r var val 00:07:37.300 06:33:41 -- accel/accel.sh@20 -- # val= 00:07:37.300 06:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # IFS=: 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # read -r var val 00:07:37.300 06:33:41 -- accel/accel.sh@20 -- # val=software 00:07:37.300 06:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.300 06:33:41 -- accel/accel.sh@22 -- # accel_module=software 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # IFS=: 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # read -r var val 00:07:37.300 06:33:41 -- accel/accel.sh@20 -- # val=32 00:07:37.300 06:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # IFS=: 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # read -r var val 00:07:37.300 06:33:41 -- accel/accel.sh@20 -- # val=32 00:07:37.300 06:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # IFS=: 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # read -r var val 00:07:37.300 06:33:41 -- accel/accel.sh@20 -- # val=1 00:07:37.300 06:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # IFS=: 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # read -r var val 00:07:37.300 06:33:41 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:37.300 06:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # IFS=: 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # read -r var val 00:07:37.300 06:33:41 -- accel/accel.sh@20 -- # val=Yes 00:07:37.300 06:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # IFS=: 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # read -r var val 00:07:37.300 06:33:41 -- accel/accel.sh@20 -- # val= 00:07:37.300 06:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # IFS=: 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # read -r var val 00:07:37.300 06:33:41 -- accel/accel.sh@20 -- # val= 00:07:37.300 06:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # IFS=: 00:07:37.300 06:33:41 -- accel/accel.sh@19 -- # read -r var val 00:07:38.674 06:33:42 -- accel/accel.sh@20 -- # val= 00:07:38.674 06:33:42 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.674 06:33:42 -- accel/accel.sh@19 -- # IFS=: 00:07:38.674 06:33:42 -- accel/accel.sh@19 -- # read -r var val 00:07:38.674 06:33:42 -- accel/accel.sh@20 -- # val= 00:07:38.674 06:33:42 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.674 06:33:42 -- accel/accel.sh@19 -- # IFS=: 00:07:38.674 06:33:42 -- accel/accel.sh@19 -- # read -r var val 00:07:38.674 06:33:42 -- accel/accel.sh@20 -- # val= 00:07:38.674 06:33:42 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.674 06:33:42 -- accel/accel.sh@19 -- # IFS=: 00:07:38.674 06:33:42 -- accel/accel.sh@19 -- # read -r var val 00:07:38.674 06:33:42 -- accel/accel.sh@20 -- # val= 00:07:38.674 06:33:42 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.674 06:33:42 -- accel/accel.sh@19 -- # IFS=: 00:07:38.674 06:33:42 -- accel/accel.sh@19 -- 
# read -r var val 00:07:38.674 06:33:42 -- accel/accel.sh@20 -- # val= 00:07:38.674 06:33:42 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.674 06:33:42 -- accel/accel.sh@19 -- # IFS=: 00:07:38.674 06:33:42 -- accel/accel.sh@19 -- # read -r var val 00:07:38.674 06:33:42 -- accel/accel.sh@20 -- # val= 00:07:38.674 06:33:42 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.674 06:33:42 -- accel/accel.sh@19 -- # IFS=: 00:07:38.674 06:33:42 -- accel/accel.sh@19 -- # read -r var val 00:07:38.674 06:33:42 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:38.674 06:33:42 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:38.674 06:33:42 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.674 00:07:38.674 real 0m1.404s 00:07:38.674 user 0m1.260s 00:07:38.674 sys 0m0.144s 00:07:38.674 06:33:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:38.674 06:33:42 -- common/autotest_common.sh@10 -- # set +x 00:07:38.674 ************************************ 00:07:38.674 END TEST accel_xor 00:07:38.674 ************************************ 00:07:38.674 06:33:42 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:38.674 06:33:42 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:38.674 06:33:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.674 06:33:42 -- common/autotest_common.sh@10 -- # set +x 00:07:38.674 ************************************ 00:07:38.674 START TEST accel_dif_verify 00:07:38.674 ************************************ 00:07:38.674 06:33:43 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:07:38.674 06:33:43 -- accel/accel.sh@16 -- # local accel_opc 00:07:38.674 06:33:43 -- accel/accel.sh@17 -- # local accel_module 00:07:38.674 06:33:43 -- accel/accel.sh@19 -- # IFS=: 00:07:38.674 06:33:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:38.674 06:33:43 -- accel/accel.sh@19 -- # read -r var val 00:07:38.674 06:33:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:38.674 06:33:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.674 06:33:43 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.674 06:33:43 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.674 06:33:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.674 06:33:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.674 06:33:43 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.674 06:33:43 -- accel/accel.sh@40 -- # local IFS=, 00:07:38.674 06:33:43 -- accel/accel.sh@41 -- # jq -r . 00:07:38.674 [2024-04-17 06:33:43.030957] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
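Every case re-initializes DPDK with the same EAL arguments seen above (-c 0x1 for a single core, --huge-unlink, a per-PID --file-prefix), and every case prints the same "No free 2048 kB hugepages reported on node 1" notice; it is informational here, since each test still completes and passes. If that notice were unwanted on this runner, 2 MiB pages could be reserved on node 1 ahead of time via the standard sysfs interface; a hedged example, where the page count of 512 is an arbitrary illustration and not taken from this job:

    # reserve 512 x 2 MiB hugepages on NUMA node 1 (count chosen purely for illustration)
    echo 512 | sudo tee /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages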
00:07:38.674 [2024-04-17 06:33:43.031025] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4073570 ] 00:07:38.674 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.674 [2024-04-17 06:33:43.095061] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.674 [2024-04-17 06:33:43.186314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.674 [2024-04-17 06:33:43.186984] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:38.674 06:33:43 -- accel/accel.sh@20 -- # val= 00:07:38.674 06:33:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.674 06:33:43 -- accel/accel.sh@19 -- # IFS=: 00:07:38.674 06:33:43 -- accel/accel.sh@19 -- # read -r var val 00:07:38.674 06:33:43 -- accel/accel.sh@20 -- # val= 00:07:38.674 06:33:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.674 06:33:43 -- accel/accel.sh@19 -- # IFS=: 00:07:38.674 06:33:43 -- accel/accel.sh@19 -- # read -r var val 00:07:38.674 06:33:43 -- accel/accel.sh@20 -- # val=0x1 00:07:38.674 06:33:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.674 06:33:43 -- accel/accel.sh@19 -- # IFS=: 00:07:38.674 06:33:43 -- accel/accel.sh@19 -- # read -r var val 00:07:38.674 06:33:43 -- accel/accel.sh@20 -- # val= 00:07:38.674 06:33:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.674 06:33:43 -- accel/accel.sh@19 -- # IFS=: 00:07:38.674 06:33:43 -- accel/accel.sh@19 -- # read -r var val 00:07:38.674 06:33:43 -- accel/accel.sh@20 -- # val= 00:07:38.674 06:33:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.674 06:33:43 -- accel/accel.sh@19 -- # IFS=: 00:07:38.674 06:33:43 -- accel/accel.sh@19 -- # read -r var val 00:07:38.674 06:33:43 -- accel/accel.sh@20 -- # val=dif_verify 00:07:38.674 06:33:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.674 06:33:43 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:38.674 06:33:43 -- accel/accel.sh@19 -- # IFS=: 00:07:38.674 06:33:43 -- accel/accel.sh@19 -- # read -r var val 00:07:38.674 06:33:43 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:38.674 06:33:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.674 06:33:43 -- accel/accel.sh@19 -- # IFS=: 00:07:38.674 06:33:43 -- accel/accel.sh@19 -- # read -r var val 00:07:38.674 06:33:43 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:38.674 06:33:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.675 06:33:43 -- accel/accel.sh@19 -- # IFS=: 00:07:38.675 06:33:43 -- accel/accel.sh@19 -- # read -r var val 00:07:38.675 06:33:43 -- accel/accel.sh@20 -- # val='512 bytes' 00:07:38.675 06:33:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.675 06:33:43 -- accel/accel.sh@19 -- # IFS=: 00:07:38.675 06:33:43 -- accel/accel.sh@19 -- # read -r var val 00:07:38.675 06:33:43 -- accel/accel.sh@20 -- # val='8 bytes' 00:07:38.675 06:33:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.675 06:33:43 -- accel/accel.sh@19 -- # IFS=: 00:07:38.675 06:33:43 -- accel/accel.sh@19 -- # read -r var val 00:07:38.675 06:33:43 -- accel/accel.sh@20 -- # val= 00:07:38.675 06:33:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.675 06:33:43 -- accel/accel.sh@19 -- # IFS=: 00:07:38.675 06:33:43 -- accel/accel.sh@19 -- # read -r var val 00:07:38.675 06:33:43 -- accel/accel.sh@20 -- # val=software 00:07:38.675 06:33:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.675 06:33:43 -- accel/accel.sh@22 -- # 
accel_module=software 00:07:38.675 06:33:43 -- accel/accel.sh@19 -- # IFS=: 00:07:38.675 06:33:43 -- accel/accel.sh@19 -- # read -r var val 00:07:38.675 06:33:43 -- accel/accel.sh@20 -- # val=32 00:07:38.675 06:33:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.675 06:33:43 -- accel/accel.sh@19 -- # IFS=: 00:07:38.675 06:33:43 -- accel/accel.sh@19 -- # read -r var val 00:07:38.675 06:33:43 -- accel/accel.sh@20 -- # val=32 00:07:38.675 06:33:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.675 06:33:43 -- accel/accel.sh@19 -- # IFS=: 00:07:38.675 06:33:43 -- accel/accel.sh@19 -- # read -r var val 00:07:38.675 06:33:43 -- accel/accel.sh@20 -- # val=1 00:07:38.675 06:33:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.675 06:33:43 -- accel/accel.sh@19 -- # IFS=: 00:07:38.675 06:33:43 -- accel/accel.sh@19 -- # read -r var val 00:07:38.675 06:33:43 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:38.675 06:33:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.675 06:33:43 -- accel/accel.sh@19 -- # IFS=: 00:07:38.675 06:33:43 -- accel/accel.sh@19 -- # read -r var val 00:07:38.675 06:33:43 -- accel/accel.sh@20 -- # val=No 00:07:38.675 06:33:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.675 06:33:43 -- accel/accel.sh@19 -- # IFS=: 00:07:38.675 06:33:43 -- accel/accel.sh@19 -- # read -r var val 00:07:38.675 06:33:43 -- accel/accel.sh@20 -- # val= 00:07:38.675 06:33:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.675 06:33:43 -- accel/accel.sh@19 -- # IFS=: 00:07:38.675 06:33:43 -- accel/accel.sh@19 -- # read -r var val 00:07:38.675 06:33:43 -- accel/accel.sh@20 -- # val= 00:07:38.675 06:33:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.675 06:33:43 -- accel/accel.sh@19 -- # IFS=: 00:07:38.675 06:33:43 -- accel/accel.sh@19 -- # read -r var val 00:07:40.049 06:33:44 -- accel/accel.sh@20 -- # val= 00:07:40.049 06:33:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.049 06:33:44 -- accel/accel.sh@19 -- # IFS=: 00:07:40.049 06:33:44 -- accel/accel.sh@19 -- # read -r var val 00:07:40.049 06:33:44 -- accel/accel.sh@20 -- # val= 00:07:40.049 06:33:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.049 06:33:44 -- accel/accel.sh@19 -- # IFS=: 00:07:40.049 06:33:44 -- accel/accel.sh@19 -- # read -r var val 00:07:40.049 06:33:44 -- accel/accel.sh@20 -- # val= 00:07:40.049 06:33:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.049 06:33:44 -- accel/accel.sh@19 -- # IFS=: 00:07:40.049 06:33:44 -- accel/accel.sh@19 -- # read -r var val 00:07:40.049 06:33:44 -- accel/accel.sh@20 -- # val= 00:07:40.049 06:33:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.049 06:33:44 -- accel/accel.sh@19 -- # IFS=: 00:07:40.049 06:33:44 -- accel/accel.sh@19 -- # read -r var val 00:07:40.049 06:33:44 -- accel/accel.sh@20 -- # val= 00:07:40.049 06:33:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.049 06:33:44 -- accel/accel.sh@19 -- # IFS=: 00:07:40.049 06:33:44 -- accel/accel.sh@19 -- # read -r var val 00:07:40.049 06:33:44 -- accel/accel.sh@20 -- # val= 00:07:40.049 06:33:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.049 06:33:44 -- accel/accel.sh@19 -- # IFS=: 00:07:40.049 06:33:44 -- accel/accel.sh@19 -- # read -r var val 00:07:40.049 06:33:44 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:40.049 06:33:44 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:40.049 06:33:44 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.049 00:07:40.049 real 0m1.412s 00:07:40.049 user 0m1.271s 00:07:40.049 sys 0m0.143s 00:07:40.049 06:33:44 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:07:40.049 06:33:44 -- common/autotest_common.sh@10 -- # set +x 00:07:40.049 ************************************ 00:07:40.049 END TEST accel_dif_verify 00:07:40.049 ************************************ 00:07:40.049 06:33:44 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:40.049 06:33:44 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:40.049 06:33:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.049 06:33:44 -- common/autotest_common.sh@10 -- # set +x 00:07:40.049 ************************************ 00:07:40.049 START TEST accel_dif_generate 00:07:40.049 ************************************ 00:07:40.049 06:33:44 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:07:40.049 06:33:44 -- accel/accel.sh@16 -- # local accel_opc 00:07:40.049 06:33:44 -- accel/accel.sh@17 -- # local accel_module 00:07:40.049 06:33:44 -- accel/accel.sh@19 -- # IFS=: 00:07:40.049 06:33:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:40.049 06:33:44 -- accel/accel.sh@19 -- # read -r var val 00:07:40.049 06:33:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:40.049 06:33:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.049 06:33:44 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.049 06:33:44 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.049 06:33:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.049 06:33:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.049 06:33:44 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:40.049 06:33:44 -- accel/accel.sh@40 -- # local IFS=, 00:07:40.049 06:33:44 -- accel/accel.sh@41 -- # jq -r . 00:07:40.049 [2024-04-17 06:33:44.559808] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
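Compared with the compare/xor cases, the DIF cases feed extra sizes through the same parser: alongside the two '4096 bytes' values, '512 bytes' and '8 bytes' appear, the latter matching the size of a standard DIF protection-information field (mapping each value to a specific accel_perf field is an assumption; the log only shows the raw var/val pairs). The dif_generate workload itself can be exercised directly with the flags from its run_test line; a sketch, with the harness's -c /dev/fd/62 JSON config deliberately left out:

    # dif_generate for 1 second; the software module is expected on this runner
    ./build/examples/accel_perf -t 1 -w dif_generate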
00:07:40.049 [2024-04-17 06:33:44.559873] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4073729 ] 00:07:40.049 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.049 [2024-04-17 06:33:44.621548] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.307 [2024-04-17 06:33:44.715571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.307 [2024-04-17 06:33:44.716170] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:40.307 06:33:44 -- accel/accel.sh@20 -- # val= 00:07:40.307 06:33:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.307 06:33:44 -- accel/accel.sh@19 -- # IFS=: 00:07:40.307 06:33:44 -- accel/accel.sh@19 -- # read -r var val 00:07:40.307 06:33:44 -- accel/accel.sh@20 -- # val= 00:07:40.307 06:33:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.307 06:33:44 -- accel/accel.sh@19 -- # IFS=: 00:07:40.307 06:33:44 -- accel/accel.sh@19 -- # read -r var val 00:07:40.307 06:33:44 -- accel/accel.sh@20 -- # val=0x1 00:07:40.307 06:33:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.307 06:33:44 -- accel/accel.sh@19 -- # IFS=: 00:07:40.307 06:33:44 -- accel/accel.sh@19 -- # read -r var val 00:07:40.307 06:33:44 -- accel/accel.sh@20 -- # val= 00:07:40.307 06:33:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.307 06:33:44 -- accel/accel.sh@19 -- # IFS=: 00:07:40.307 06:33:44 -- accel/accel.sh@19 -- # read -r var val 00:07:40.307 06:33:44 -- accel/accel.sh@20 -- # val= 00:07:40.307 06:33:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.307 06:33:44 -- accel/accel.sh@19 -- # IFS=: 00:07:40.307 06:33:44 -- accel/accel.sh@19 -- # read -r var val 00:07:40.307 06:33:44 -- accel/accel.sh@20 -- # val=dif_generate 00:07:40.307 06:33:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.307 06:33:44 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:40.307 06:33:44 -- accel/accel.sh@19 -- # IFS=: 00:07:40.307 06:33:44 -- accel/accel.sh@19 -- # read -r var val 00:07:40.307 06:33:44 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:40.307 06:33:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.307 06:33:44 -- accel/accel.sh@19 -- # IFS=: 00:07:40.307 06:33:44 -- accel/accel.sh@19 -- # read -r var val 00:07:40.307 06:33:44 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:40.307 06:33:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.307 06:33:44 -- accel/accel.sh@19 -- # IFS=: 00:07:40.307 06:33:44 -- accel/accel.sh@19 -- # read -r var val 00:07:40.307 06:33:44 -- accel/accel.sh@20 -- # val='512 bytes' 00:07:40.307 06:33:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.307 06:33:44 -- accel/accel.sh@19 -- # IFS=: 00:07:40.307 06:33:44 -- accel/accel.sh@19 -- # read -r var val 00:07:40.307 06:33:44 -- accel/accel.sh@20 -- # val='8 bytes' 00:07:40.307 06:33:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.307 06:33:44 -- accel/accel.sh@19 -- # IFS=: 00:07:40.307 06:33:44 -- accel/accel.sh@19 -- # read -r var val 00:07:40.307 06:33:44 -- accel/accel.sh@20 -- # val= 00:07:40.307 06:33:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.307 06:33:44 -- accel/accel.sh@19 -- # IFS=: 00:07:40.307 06:33:44 -- accel/accel.sh@19 -- # read -r var val 00:07:40.307 06:33:44 -- accel/accel.sh@20 -- # val=software 00:07:40.308 06:33:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.308 06:33:44 -- accel/accel.sh@22 -- # 
accel_module=software 00:07:40.308 06:33:44 -- accel/accel.sh@19 -- # IFS=: 00:07:40.308 06:33:44 -- accel/accel.sh@19 -- # read -r var val 00:07:40.308 06:33:44 -- accel/accel.sh@20 -- # val=32 00:07:40.308 06:33:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.308 06:33:44 -- accel/accel.sh@19 -- # IFS=: 00:07:40.308 06:33:44 -- accel/accel.sh@19 -- # read -r var val 00:07:40.308 06:33:44 -- accel/accel.sh@20 -- # val=32 00:07:40.308 06:33:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.308 06:33:44 -- accel/accel.sh@19 -- # IFS=: 00:07:40.308 06:33:44 -- accel/accel.sh@19 -- # read -r var val 00:07:40.308 06:33:44 -- accel/accel.sh@20 -- # val=1 00:07:40.308 06:33:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.308 06:33:44 -- accel/accel.sh@19 -- # IFS=: 00:07:40.308 06:33:44 -- accel/accel.sh@19 -- # read -r var val 00:07:40.308 06:33:44 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:40.308 06:33:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.308 06:33:44 -- accel/accel.sh@19 -- # IFS=: 00:07:40.308 06:33:44 -- accel/accel.sh@19 -- # read -r var val 00:07:40.308 06:33:44 -- accel/accel.sh@20 -- # val=No 00:07:40.308 06:33:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.308 06:33:44 -- accel/accel.sh@19 -- # IFS=: 00:07:40.308 06:33:44 -- accel/accel.sh@19 -- # read -r var val 00:07:40.308 06:33:44 -- accel/accel.sh@20 -- # val= 00:07:40.308 06:33:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.308 06:33:44 -- accel/accel.sh@19 -- # IFS=: 00:07:40.308 06:33:44 -- accel/accel.sh@19 -- # read -r var val 00:07:40.308 06:33:44 -- accel/accel.sh@20 -- # val= 00:07:40.308 06:33:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.308 06:33:44 -- accel/accel.sh@19 -- # IFS=: 00:07:40.308 06:33:44 -- accel/accel.sh@19 -- # read -r var val 00:07:41.682 06:33:45 -- accel/accel.sh@20 -- # val= 00:07:41.682 06:33:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.682 06:33:45 -- accel/accel.sh@19 -- # IFS=: 00:07:41.682 06:33:45 -- accel/accel.sh@19 -- # read -r var val 00:07:41.682 06:33:45 -- accel/accel.sh@20 -- # val= 00:07:41.682 06:33:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.682 06:33:45 -- accel/accel.sh@19 -- # IFS=: 00:07:41.682 06:33:45 -- accel/accel.sh@19 -- # read -r var val 00:07:41.682 06:33:45 -- accel/accel.sh@20 -- # val= 00:07:41.682 06:33:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.682 06:33:45 -- accel/accel.sh@19 -- # IFS=: 00:07:41.682 06:33:45 -- accel/accel.sh@19 -- # read -r var val 00:07:41.682 06:33:45 -- accel/accel.sh@20 -- # val= 00:07:41.682 06:33:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.682 06:33:45 -- accel/accel.sh@19 -- # IFS=: 00:07:41.682 06:33:45 -- accel/accel.sh@19 -- # read -r var val 00:07:41.682 06:33:45 -- accel/accel.sh@20 -- # val= 00:07:41.682 06:33:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.682 06:33:45 -- accel/accel.sh@19 -- # IFS=: 00:07:41.682 06:33:45 -- accel/accel.sh@19 -- # read -r var val 00:07:41.682 06:33:45 -- accel/accel.sh@20 -- # val= 00:07:41.682 06:33:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.682 06:33:45 -- accel/accel.sh@19 -- # IFS=: 00:07:41.682 06:33:45 -- accel/accel.sh@19 -- # read -r var val 00:07:41.682 06:33:45 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:41.682 06:33:45 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:41.682 06:33:45 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.682 00:07:41.682 real 0m1.396s 00:07:41.682 user 0m1.261s 00:07:41.682 sys 0m0.137s 00:07:41.682 06:33:45 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:07:41.682 06:33:45 -- common/autotest_common.sh@10 -- # set +x 00:07:41.682 ************************************ 00:07:41.682 END TEST accel_dif_generate 00:07:41.682 ************************************ 00:07:41.682 06:33:45 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:41.682 06:33:45 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:41.682 06:33:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:41.682 06:33:45 -- common/autotest_common.sh@10 -- # set +x 00:07:41.682 ************************************ 00:07:41.682 START TEST accel_dif_generate_copy 00:07:41.682 ************************************ 00:07:41.682 06:33:46 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:07:41.682 06:33:46 -- accel/accel.sh@16 -- # local accel_opc 00:07:41.682 06:33:46 -- accel/accel.sh@17 -- # local accel_module 00:07:41.682 06:33:46 -- accel/accel.sh@19 -- # IFS=: 00:07:41.682 06:33:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:41.682 06:33:46 -- accel/accel.sh@19 -- # read -r var val 00:07:41.683 06:33:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:41.683 06:33:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.683 06:33:46 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.683 06:33:46 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.683 06:33:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.683 06:33:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.683 06:33:46 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.683 06:33:46 -- accel/accel.sh@40 -- # local IFS=, 00:07:41.683 06:33:46 -- accel/accel.sh@41 -- # jq -r . 00:07:41.683 [2024-04-17 06:33:46.075346] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
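The real/user/sys triplet that closes every case (about 1.4 s real for a 1-second -t 1 run, the remainder being application start-up and teardown) is ordinary bash time output emitted by run_test. Reproducing it outside the harness is just the time keyword around the same command; a sketch under the same assumptions as the earlier ones:

    # wall-clock the dif_generate_copy workload the same way run_test does
    time ./build/examples/accel_perf -t 1 -w dif_generate_copy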
00:07:41.683 [2024-04-17 06:33:46.075416] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4074011 ] 00:07:41.683 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.683 [2024-04-17 06:33:46.137242] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.683 [2024-04-17 06:33:46.226975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.683 [2024-04-17 06:33:46.227659] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:41.941 06:33:46 -- accel/accel.sh@20 -- # val= 00:07:41.941 06:33:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # IFS=: 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # read -r var val 00:07:41.941 06:33:46 -- accel/accel.sh@20 -- # val= 00:07:41.941 06:33:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # IFS=: 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # read -r var val 00:07:41.941 06:33:46 -- accel/accel.sh@20 -- # val=0x1 00:07:41.941 06:33:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # IFS=: 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # read -r var val 00:07:41.941 06:33:46 -- accel/accel.sh@20 -- # val= 00:07:41.941 06:33:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # IFS=: 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # read -r var val 00:07:41.941 06:33:46 -- accel/accel.sh@20 -- # val= 00:07:41.941 06:33:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # IFS=: 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # read -r var val 00:07:41.941 06:33:46 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:41.941 06:33:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.941 06:33:46 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # IFS=: 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # read -r var val 00:07:41.941 06:33:46 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:41.941 06:33:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # IFS=: 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # read -r var val 00:07:41.941 06:33:46 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:41.941 06:33:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # IFS=: 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # read -r var val 00:07:41.941 06:33:46 -- accel/accel.sh@20 -- # val= 00:07:41.941 06:33:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # IFS=: 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # read -r var val 00:07:41.941 06:33:46 -- accel/accel.sh@20 -- # val=software 00:07:41.941 06:33:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.941 06:33:46 -- accel/accel.sh@22 -- # accel_module=software 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # IFS=: 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # read -r var val 00:07:41.941 06:33:46 -- accel/accel.sh@20 -- # val=32 00:07:41.941 06:33:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # IFS=: 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # read -r var val 00:07:41.941 06:33:46 -- accel/accel.sh@20 -- # val=32 00:07:41.941 06:33:46 -- accel/accel.sh@21 -- # case "$var" 
in 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # IFS=: 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # read -r var val 00:07:41.941 06:33:46 -- accel/accel.sh@20 -- # val=1 00:07:41.941 06:33:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # IFS=: 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # read -r var val 00:07:41.941 06:33:46 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:41.941 06:33:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # IFS=: 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # read -r var val 00:07:41.941 06:33:46 -- accel/accel.sh@20 -- # val=No 00:07:41.941 06:33:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # IFS=: 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # read -r var val 00:07:41.941 06:33:46 -- accel/accel.sh@20 -- # val= 00:07:41.941 06:33:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # IFS=: 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # read -r var val 00:07:41.941 06:33:46 -- accel/accel.sh@20 -- # val= 00:07:41.941 06:33:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # IFS=: 00:07:41.941 06:33:46 -- accel/accel.sh@19 -- # read -r var val 00:07:42.875 06:33:47 -- accel/accel.sh@20 -- # val= 00:07:42.875 06:33:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.875 06:33:47 -- accel/accel.sh@19 -- # IFS=: 00:07:42.875 06:33:47 -- accel/accel.sh@19 -- # read -r var val 00:07:42.875 06:33:47 -- accel/accel.sh@20 -- # val= 00:07:42.875 06:33:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.875 06:33:47 -- accel/accel.sh@19 -- # IFS=: 00:07:42.875 06:33:47 -- accel/accel.sh@19 -- # read -r var val 00:07:42.875 06:33:47 -- accel/accel.sh@20 -- # val= 00:07:42.875 06:33:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.875 06:33:47 -- accel/accel.sh@19 -- # IFS=: 00:07:42.875 06:33:47 -- accel/accel.sh@19 -- # read -r var val 00:07:42.875 06:33:47 -- accel/accel.sh@20 -- # val= 00:07:42.875 06:33:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.875 06:33:47 -- accel/accel.sh@19 -- # IFS=: 00:07:42.875 06:33:47 -- accel/accel.sh@19 -- # read -r var val 00:07:42.875 06:33:47 -- accel/accel.sh@20 -- # val= 00:07:42.875 06:33:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.875 06:33:47 -- accel/accel.sh@19 -- # IFS=: 00:07:42.875 06:33:47 -- accel/accel.sh@19 -- # read -r var val 00:07:42.875 06:33:47 -- accel/accel.sh@20 -- # val= 00:07:42.875 06:33:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.875 06:33:47 -- accel/accel.sh@19 -- # IFS=: 00:07:42.875 06:33:47 -- accel/accel.sh@19 -- # read -r var val 00:07:42.875 06:33:47 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:42.875 06:33:47 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:42.875 06:33:47 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:42.875 00:07:42.875 real 0m1.403s 00:07:42.875 user 0m1.263s 00:07:42.875 sys 0m0.141s 00:07:42.875 06:33:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:42.875 06:33:47 -- common/autotest_common.sh@10 -- # set +x 00:07:42.875 ************************************ 00:07:42.875 END TEST accel_dif_generate_copy 00:07:42.875 ************************************ 00:07:43.133 06:33:47 -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:43.133 06:33:47 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:43.133 
06:33:47 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:43.133 06:33:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.133 06:33:47 -- common/autotest_common.sh@10 -- # set +x 00:07:43.133 ************************************ 00:07:43.133 START TEST accel_comp 00:07:43.133 ************************************ 00:07:43.133 06:33:47 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:43.133 06:33:47 -- accel/accel.sh@16 -- # local accel_opc 00:07:43.133 06:33:47 -- accel/accel.sh@17 -- # local accel_module 00:07:43.133 06:33:47 -- accel/accel.sh@19 -- # IFS=: 00:07:43.133 06:33:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:43.133 06:33:47 -- accel/accel.sh@19 -- # read -r var val 00:07:43.133 06:33:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:43.133 06:33:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.133 06:33:47 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:43.133 06:33:47 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:43.133 06:33:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.133 06:33:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.133 06:33:47 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.133 06:33:47 -- accel/accel.sh@40 -- # local IFS=, 00:07:43.133 06:33:47 -- accel/accel.sh@41 -- # jq -r . 00:07:43.133 [2024-04-17 06:33:47.599441] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:07:43.134 [2024-04-17 06:33:47.599527] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4074175 ] 00:07:43.134 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.134 [2024-04-17 06:33:47.665637] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.392 [2024-04-17 06:33:47.760056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.392 [2024-04-17 06:33:47.760690] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:43.392 06:33:47 -- accel/accel.sh@20 -- # val= 00:07:43.392 06:33:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # IFS=: 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # read -r var val 00:07:43.392 06:33:47 -- accel/accel.sh@20 -- # val= 00:07:43.392 06:33:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # IFS=: 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # read -r var val 00:07:43.392 06:33:47 -- accel/accel.sh@20 -- # val= 00:07:43.392 06:33:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # IFS=: 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # read -r var val 00:07:43.392 06:33:47 -- accel/accel.sh@20 -- # val=0x1 00:07:43.392 06:33:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # IFS=: 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # read -r var val 00:07:43.392 06:33:47 -- accel/accel.sh@20 -- # val= 00:07:43.392 06:33:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # IFS=: 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # read 
-r var val 00:07:43.392 06:33:47 -- accel/accel.sh@20 -- # val= 00:07:43.392 06:33:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # IFS=: 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # read -r var val 00:07:43.392 06:33:47 -- accel/accel.sh@20 -- # val=compress 00:07:43.392 06:33:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.392 06:33:47 -- accel/accel.sh@23 -- # accel_opc=compress 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # IFS=: 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # read -r var val 00:07:43.392 06:33:47 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:43.392 06:33:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # IFS=: 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # read -r var val 00:07:43.392 06:33:47 -- accel/accel.sh@20 -- # val= 00:07:43.392 06:33:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # IFS=: 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # read -r var val 00:07:43.392 06:33:47 -- accel/accel.sh@20 -- # val=software 00:07:43.392 06:33:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.392 06:33:47 -- accel/accel.sh@22 -- # accel_module=software 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # IFS=: 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # read -r var val 00:07:43.392 06:33:47 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:43.392 06:33:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # IFS=: 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # read -r var val 00:07:43.392 06:33:47 -- accel/accel.sh@20 -- # val=32 00:07:43.392 06:33:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # IFS=: 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # read -r var val 00:07:43.392 06:33:47 -- accel/accel.sh@20 -- # val=32 00:07:43.392 06:33:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # IFS=: 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # read -r var val 00:07:43.392 06:33:47 -- accel/accel.sh@20 -- # val=1 00:07:43.392 06:33:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # IFS=: 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # read -r var val 00:07:43.392 06:33:47 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:43.392 06:33:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # IFS=: 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # read -r var val 00:07:43.392 06:33:47 -- accel/accel.sh@20 -- # val=No 00:07:43.392 06:33:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # IFS=: 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # read -r var val 00:07:43.392 06:33:47 -- accel/accel.sh@20 -- # val= 00:07:43.392 06:33:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # IFS=: 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # read -r var val 00:07:43.392 06:33:47 -- accel/accel.sh@20 -- # val= 00:07:43.392 06:33:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # IFS=: 00:07:43.392 06:33:47 -- accel/accel.sh@19 -- # read -r var val 00:07:44.766 06:33:48 -- accel/accel.sh@20 -- # val= 00:07:44.766 06:33:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.766 06:33:48 -- accel/accel.sh@19 -- # IFS=: 00:07:44.766 06:33:48 -- accel/accel.sh@19 -- # read -r var val 00:07:44.766 
06:33:48 -- accel/accel.sh@20 -- # val= 00:07:44.766 06:33:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.766 06:33:48 -- accel/accel.sh@19 -- # IFS=: 00:07:44.766 06:33:48 -- accel/accel.sh@19 -- # read -r var val 00:07:44.766 06:33:48 -- accel/accel.sh@20 -- # val= 00:07:44.766 06:33:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.766 06:33:48 -- accel/accel.sh@19 -- # IFS=: 00:07:44.766 06:33:48 -- accel/accel.sh@19 -- # read -r var val 00:07:44.766 06:33:48 -- accel/accel.sh@20 -- # val= 00:07:44.766 06:33:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.766 06:33:48 -- accel/accel.sh@19 -- # IFS=: 00:07:44.766 06:33:48 -- accel/accel.sh@19 -- # read -r var val 00:07:44.766 06:33:48 -- accel/accel.sh@20 -- # val= 00:07:44.766 06:33:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.766 06:33:48 -- accel/accel.sh@19 -- # IFS=: 00:07:44.766 06:33:48 -- accel/accel.sh@19 -- # read -r var val 00:07:44.766 06:33:48 -- accel/accel.sh@20 -- # val= 00:07:44.766 06:33:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.766 06:33:48 -- accel/accel.sh@19 -- # IFS=: 00:07:44.766 06:33:48 -- accel/accel.sh@19 -- # read -r var val 00:07:44.766 06:33:48 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:44.766 06:33:48 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:44.766 06:33:48 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.766 00:07:44.766 real 0m1.406s 00:07:44.766 user 0m1.261s 00:07:44.766 sys 0m0.146s 00:07:44.766 06:33:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:44.766 06:33:48 -- common/autotest_common.sh@10 -- # set +x 00:07:44.766 ************************************ 00:07:44.766 END TEST accel_comp 00:07:44.766 ************************************ 00:07:44.766 06:33:49 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:44.766 06:33:49 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:44.766 06:33:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:44.766 06:33:49 -- common/autotest_common.sh@10 -- # set +x 00:07:44.766 ************************************ 00:07:44.766 START TEST accel_decomp 00:07:44.766 ************************************ 00:07:44.766 06:33:49 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:44.766 06:33:49 -- accel/accel.sh@16 -- # local accel_opc 00:07:44.766 06:33:49 -- accel/accel.sh@17 -- # local accel_module 00:07:44.766 06:33:49 -- accel/accel.sh@19 -- # IFS=: 00:07:44.766 06:33:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:44.766 06:33:49 -- accel/accel.sh@19 -- # read -r var val 00:07:44.766 06:33:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:44.766 06:33:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.766 06:33:49 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.766 06:33:49 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.766 06:33:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.766 06:33:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.766 06:33:49 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.766 06:33:49 -- accel/accel.sh@40 -- # local IFS=, 00:07:44.766 06:33:49 -- accel/accel.sh@41 -- # jq -r . 
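The compress and decompress cases differ from the earlier ones only in pointing -l at test/accel/bib inside the SPDK checkout, as the command lines above show. A sketch of the decompress variant run from the top of an SPDK tree; the relative path is assumed, and the JSON config the harness pipes to fd 62 is again omitted:

    # decompress the sample input for 1 second and verify the result (-y), per the flags in the log
    ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y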
00:07:44.766 [2024-04-17 06:33:49.132456] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:07:44.766 [2024-04-17 06:33:49.132517] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4074341 ] 00:07:44.766 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.766 [2024-04-17 06:33:49.195434] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.766 [2024-04-17 06:33:49.284648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.766 [2024-04-17 06:33:49.285300] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:44.766 06:33:49 -- accel/accel.sh@20 -- # val= 00:07:44.766 06:33:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.766 06:33:49 -- accel/accel.sh@19 -- # IFS=: 00:07:44.766 06:33:49 -- accel/accel.sh@19 -- # read -r var val 00:07:44.766 06:33:49 -- accel/accel.sh@20 -- # val= 00:07:44.766 06:33:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.766 06:33:49 -- accel/accel.sh@19 -- # IFS=: 00:07:44.766 06:33:49 -- accel/accel.sh@19 -- # read -r var val 00:07:44.766 06:33:49 -- accel/accel.sh@20 -- # val= 00:07:44.766 06:33:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.766 06:33:49 -- accel/accel.sh@19 -- # IFS=: 00:07:44.766 06:33:49 -- accel/accel.sh@19 -- # read -r var val 00:07:44.766 06:33:49 -- accel/accel.sh@20 -- # val=0x1 00:07:44.766 06:33:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.766 06:33:49 -- accel/accel.sh@19 -- # IFS=: 00:07:44.766 06:33:49 -- accel/accel.sh@19 -- # read -r var val 00:07:44.766 06:33:49 -- accel/accel.sh@20 -- # val= 00:07:44.766 06:33:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.766 06:33:49 -- accel/accel.sh@19 -- # IFS=: 00:07:44.766 06:33:49 -- accel/accel.sh@19 -- # read -r var val 00:07:44.766 06:33:49 -- accel/accel.sh@20 -- # val= 00:07:44.766 06:33:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.766 06:33:49 -- accel/accel.sh@19 -- # IFS=: 00:07:44.766 06:33:49 -- accel/accel.sh@19 -- # read -r var val 00:07:44.766 06:33:49 -- accel/accel.sh@20 -- # val=decompress 00:07:44.766 06:33:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.766 06:33:49 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:44.766 06:33:49 -- accel/accel.sh@19 -- # IFS=: 00:07:44.766 06:33:49 -- accel/accel.sh@19 -- # read -r var val 00:07:44.766 06:33:49 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:44.766 06:33:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.766 06:33:49 -- accel/accel.sh@19 -- # IFS=: 00:07:44.766 06:33:49 -- accel/accel.sh@19 -- # read -r var val 00:07:44.766 06:33:49 -- accel/accel.sh@20 -- # val= 00:07:44.766 06:33:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.766 06:33:49 -- accel/accel.sh@19 -- # IFS=: 00:07:44.766 06:33:49 -- accel/accel.sh@19 -- # read -r var val 00:07:44.766 06:33:49 -- accel/accel.sh@20 -- # val=software 00:07:44.766 06:33:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.766 06:33:49 -- accel/accel.sh@22 -- # accel_module=software 00:07:44.766 06:33:49 -- accel/accel.sh@19 -- # IFS=: 00:07:44.766 06:33:49 -- accel/accel.sh@19 -- # read -r var val 00:07:44.766 06:33:49 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:44.766 06:33:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.766 06:33:49 -- accel/accel.sh@19 -- # IFS=: 00:07:44.766 06:33:49 -- 
accel/accel.sh@19 -- # read -r var val 00:07:44.766 06:33:49 -- accel/accel.sh@20 -- # val=32 00:07:44.766 06:33:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.766 06:33:49 -- accel/accel.sh@19 -- # IFS=: 00:07:44.767 06:33:49 -- accel/accel.sh@19 -- # read -r var val 00:07:44.767 06:33:49 -- accel/accel.sh@20 -- # val=32 00:07:44.767 06:33:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.767 06:33:49 -- accel/accel.sh@19 -- # IFS=: 00:07:44.767 06:33:49 -- accel/accel.sh@19 -- # read -r var val 00:07:44.767 06:33:49 -- accel/accel.sh@20 -- # val=1 00:07:44.767 06:33:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.767 06:33:49 -- accel/accel.sh@19 -- # IFS=: 00:07:44.767 06:33:49 -- accel/accel.sh@19 -- # read -r var val 00:07:44.767 06:33:49 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:44.767 06:33:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.767 06:33:49 -- accel/accel.sh@19 -- # IFS=: 00:07:44.767 06:33:49 -- accel/accel.sh@19 -- # read -r var val 00:07:44.767 06:33:49 -- accel/accel.sh@20 -- # val=Yes 00:07:44.767 06:33:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.767 06:33:49 -- accel/accel.sh@19 -- # IFS=: 00:07:44.767 06:33:49 -- accel/accel.sh@19 -- # read -r var val 00:07:44.767 06:33:49 -- accel/accel.sh@20 -- # val= 00:07:44.767 06:33:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.767 06:33:49 -- accel/accel.sh@19 -- # IFS=: 00:07:44.767 06:33:49 -- accel/accel.sh@19 -- # read -r var val 00:07:44.767 06:33:49 -- accel/accel.sh@20 -- # val= 00:07:44.767 06:33:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.767 06:33:49 -- accel/accel.sh@19 -- # IFS=: 00:07:44.767 06:33:49 -- accel/accel.sh@19 -- # read -r var val 00:07:46.139 06:33:50 -- accel/accel.sh@20 -- # val= 00:07:46.139 06:33:50 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.139 06:33:50 -- accel/accel.sh@19 -- # IFS=: 00:07:46.139 06:33:50 -- accel/accel.sh@19 -- # read -r var val 00:07:46.139 06:33:50 -- accel/accel.sh@20 -- # val= 00:07:46.139 06:33:50 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.139 06:33:50 -- accel/accel.sh@19 -- # IFS=: 00:07:46.139 06:33:50 -- accel/accel.sh@19 -- # read -r var val 00:07:46.139 06:33:50 -- accel/accel.sh@20 -- # val= 00:07:46.139 06:33:50 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.139 06:33:50 -- accel/accel.sh@19 -- # IFS=: 00:07:46.139 06:33:50 -- accel/accel.sh@19 -- # read -r var val 00:07:46.139 06:33:50 -- accel/accel.sh@20 -- # val= 00:07:46.139 06:33:50 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.139 06:33:50 -- accel/accel.sh@19 -- # IFS=: 00:07:46.139 06:33:50 -- accel/accel.sh@19 -- # read -r var val 00:07:46.139 06:33:50 -- accel/accel.sh@20 -- # val= 00:07:46.139 06:33:50 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.139 06:33:50 -- accel/accel.sh@19 -- # IFS=: 00:07:46.139 06:33:50 -- accel/accel.sh@19 -- # read -r var val 00:07:46.139 06:33:50 -- accel/accel.sh@20 -- # val= 00:07:46.139 06:33:50 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.139 06:33:50 -- accel/accel.sh@19 -- # IFS=: 00:07:46.139 06:33:50 -- accel/accel.sh@19 -- # read -r var val 00:07:46.139 06:33:50 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:46.139 06:33:50 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:46.139 06:33:50 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.139 00:07:46.139 real 0m1.395s 00:07:46.139 user 0m1.255s 00:07:46.139 sys 0m0.143s 00:07:46.139 06:33:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:46.139 06:33:50 -- common/autotest_common.sh@10 -- # set +x 
00:07:46.139 ************************************ 00:07:46.139 END TEST accel_decomp 00:07:46.139 ************************************ 00:07:46.139 06:33:50 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:46.139 06:33:50 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:46.139 06:33:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:46.139 06:33:50 -- common/autotest_common.sh@10 -- # set +x 00:07:46.139 ************************************ 00:07:46.139 START TEST accel_decmop_full 00:07:46.139 ************************************ 00:07:46.139 06:33:50 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:46.139 06:33:50 -- accel/accel.sh@16 -- # local accel_opc 00:07:46.139 06:33:50 -- accel/accel.sh@17 -- # local accel_module 00:07:46.139 06:33:50 -- accel/accel.sh@19 -- # IFS=: 00:07:46.140 06:33:50 -- accel/accel.sh@19 -- # read -r var val 00:07:46.140 06:33:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:46.140 06:33:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:46.140 06:33:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:46.140 06:33:50 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:46.140 06:33:50 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:46.140 06:33:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.140 06:33:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.140 06:33:50 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:46.140 06:33:50 -- accel/accel.sh@40 -- # local IFS=, 00:07:46.140 06:33:50 -- accel/accel.sh@41 -- # jq -r . 00:07:46.140 [2024-04-17 06:33:50.649871] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:07:46.140 [2024-04-17 06:33:50.649931] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4074613 ] 00:07:46.140 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.140 [2024-04-17 06:33:50.713817] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.398 [2024-04-17 06:33:50.813396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.398 [2024-04-17 06:33:50.814077] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:46.398 06:33:50 -- accel/accel.sh@20 -- # val= 00:07:46.398 06:33:50 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # IFS=: 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # read -r var val 00:07:46.398 06:33:50 -- accel/accel.sh@20 -- # val= 00:07:46.398 06:33:50 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # IFS=: 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # read -r var val 00:07:46.398 06:33:50 -- accel/accel.sh@20 -- # val= 00:07:46.398 06:33:50 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # IFS=: 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # read -r var val 00:07:46.398 06:33:50 -- accel/accel.sh@20 -- # val=0x1 00:07:46.398 06:33:50 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # IFS=: 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # read -r var val 00:07:46.398 06:33:50 -- accel/accel.sh@20 -- # val= 00:07:46.398 06:33:50 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # IFS=: 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # read -r var val 00:07:46.398 06:33:50 -- accel/accel.sh@20 -- # val= 00:07:46.398 06:33:50 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # IFS=: 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # read -r var val 00:07:46.398 06:33:50 -- accel/accel.sh@20 -- # val=decompress 00:07:46.398 06:33:50 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.398 06:33:50 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # IFS=: 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # read -r var val 00:07:46.398 06:33:50 -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:46.398 06:33:50 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # IFS=: 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # read -r var val 00:07:46.398 06:33:50 -- accel/accel.sh@20 -- # val= 00:07:46.398 06:33:50 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # IFS=: 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # read -r var val 00:07:46.398 06:33:50 -- accel/accel.sh@20 -- # val=software 00:07:46.398 06:33:50 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.398 06:33:50 -- accel/accel.sh@22 -- # accel_module=software 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # IFS=: 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # read -r var val 00:07:46.398 06:33:50 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:46.398 06:33:50 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # IFS=: 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # read -r var val 00:07:46.398 06:33:50 -- accel/accel.sh@20 -- # val=32 00:07:46.398 06:33:50 
-- accel/accel.sh@21 -- # case "$var" in 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # IFS=: 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # read -r var val 00:07:46.398 06:33:50 -- accel/accel.sh@20 -- # val=32 00:07:46.398 06:33:50 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # IFS=: 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # read -r var val 00:07:46.398 06:33:50 -- accel/accel.sh@20 -- # val=1 00:07:46.398 06:33:50 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # IFS=: 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # read -r var val 00:07:46.398 06:33:50 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:46.398 06:33:50 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # IFS=: 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # read -r var val 00:07:46.398 06:33:50 -- accel/accel.sh@20 -- # val=Yes 00:07:46.398 06:33:50 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # IFS=: 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # read -r var val 00:07:46.398 06:33:50 -- accel/accel.sh@20 -- # val= 00:07:46.398 06:33:50 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # IFS=: 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # read -r var val 00:07:46.398 06:33:50 -- accel/accel.sh@20 -- # val= 00:07:46.398 06:33:50 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # IFS=: 00:07:46.398 06:33:50 -- accel/accel.sh@19 -- # read -r var val 00:07:47.771 06:33:52 -- accel/accel.sh@20 -- # val= 00:07:47.771 06:33:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.771 06:33:52 -- accel/accel.sh@19 -- # IFS=: 00:07:47.771 06:33:52 -- accel/accel.sh@19 -- # read -r var val 00:07:47.771 06:33:52 -- accel/accel.sh@20 -- # val= 00:07:47.771 06:33:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.771 06:33:52 -- accel/accel.sh@19 -- # IFS=: 00:07:47.771 06:33:52 -- accel/accel.sh@19 -- # read -r var val 00:07:47.771 06:33:52 -- accel/accel.sh@20 -- # val= 00:07:47.771 06:33:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.771 06:33:52 -- accel/accel.sh@19 -- # IFS=: 00:07:47.771 06:33:52 -- accel/accel.sh@19 -- # read -r var val 00:07:47.771 06:33:52 -- accel/accel.sh@20 -- # val= 00:07:47.771 06:33:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.771 06:33:52 -- accel/accel.sh@19 -- # IFS=: 00:07:47.771 06:33:52 -- accel/accel.sh@19 -- # read -r var val 00:07:47.771 06:33:52 -- accel/accel.sh@20 -- # val= 00:07:47.771 06:33:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.771 06:33:52 -- accel/accel.sh@19 -- # IFS=: 00:07:47.771 06:33:52 -- accel/accel.sh@19 -- # read -r var val 00:07:47.771 06:33:52 -- accel/accel.sh@20 -- # val= 00:07:47.771 06:33:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.771 06:33:52 -- accel/accel.sh@19 -- # IFS=: 00:07:47.771 06:33:52 -- accel/accel.sh@19 -- # read -r var val 00:07:47.771 06:33:52 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:47.771 06:33:52 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:47.771 06:33:52 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.771 00:07:47.771 real 0m1.433s 00:07:47.771 user 0m1.287s 00:07:47.771 sys 0m0.148s 00:07:47.771 06:33:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:47.771 06:33:52 -- common/autotest_common.sh@10 -- # set +x 00:07:47.771 ************************************ 00:07:47.771 END TEST accel_decmop_full 00:07:47.771 
************************************ 00:07:47.771 06:33:52 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:47.771 06:33:52 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:47.771 06:33:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:47.771 06:33:52 -- common/autotest_common.sh@10 -- # set +x 00:07:47.771 ************************************ 00:07:47.771 START TEST accel_decomp_mcore 00:07:47.771 ************************************ 00:07:47.771 06:33:52 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:47.771 06:33:52 -- accel/accel.sh@16 -- # local accel_opc 00:07:47.771 06:33:52 -- accel/accel.sh@17 -- # local accel_module 00:07:47.771 06:33:52 -- accel/accel.sh@19 -- # IFS=: 00:07:47.771 06:33:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:47.771 06:33:52 -- accel/accel.sh@19 -- # read -r var val 00:07:47.771 06:33:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:47.771 06:33:52 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.771 06:33:52 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.771 06:33:52 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.771 06:33:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.771 06:33:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.771 06:33:52 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.771 06:33:52 -- accel/accel.sh@40 -- # local IFS=, 00:07:47.771 06:33:52 -- accel/accel.sh@41 -- # jq -r . 00:07:47.771 [2024-04-17 06:33:52.204663] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:07:47.772 [2024-04-17 06:33:52.204724] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4074790 ] 00:07:47.772 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.772 [2024-04-17 06:33:52.269505] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:47.772 [2024-04-17 06:33:52.366312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.772 [2024-04-17 06:33:52.366402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.772 [2024-04-17 06:33:52.366365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.772 [2024-04-17 06:33:52.366405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.772 [2024-04-17 06:33:52.367268] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:48.030 06:33:52 -- accel/accel.sh@20 -- # val= 00:07:48.030 06:33:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # IFS=: 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # read -r var val 00:07:48.030 06:33:52 -- accel/accel.sh@20 -- # val= 00:07:48.030 06:33:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # IFS=: 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # read -r var val 00:07:48.030 06:33:52 -- accel/accel.sh@20 -- # val= 00:07:48.030 06:33:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # IFS=: 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # read -r var val 00:07:48.030 06:33:52 -- accel/accel.sh@20 -- # val=0xf 00:07:48.030 06:33:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # IFS=: 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # read -r var val 00:07:48.030 06:33:52 -- accel/accel.sh@20 -- # val= 00:07:48.030 06:33:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # IFS=: 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # read -r var val 00:07:48.030 06:33:52 -- accel/accel.sh@20 -- # val= 00:07:48.030 06:33:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # IFS=: 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # read -r var val 00:07:48.030 06:33:52 -- accel/accel.sh@20 -- # val=decompress 00:07:48.030 06:33:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.030 06:33:52 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # IFS=: 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # read -r var val 00:07:48.030 06:33:52 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:48.030 06:33:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # IFS=: 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # read -r var val 00:07:48.030 06:33:52 -- accel/accel.sh@20 -- # val= 00:07:48.030 06:33:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # IFS=: 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # read -r var val 00:07:48.030 06:33:52 -- accel/accel.sh@20 -- # val=software 00:07:48.030 06:33:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.030 06:33:52 -- accel/accel.sh@22 -- # accel_module=software 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # IFS=: 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # read -r var val 00:07:48.030 06:33:52 -- accel/accel.sh@20 -- # 
val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:48.030 06:33:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # IFS=: 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # read -r var val 00:07:48.030 06:33:52 -- accel/accel.sh@20 -- # val=32 00:07:48.030 06:33:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # IFS=: 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # read -r var val 00:07:48.030 06:33:52 -- accel/accel.sh@20 -- # val=32 00:07:48.030 06:33:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # IFS=: 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # read -r var val 00:07:48.030 06:33:52 -- accel/accel.sh@20 -- # val=1 00:07:48.030 06:33:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # IFS=: 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # read -r var val 00:07:48.030 06:33:52 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:48.030 06:33:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # IFS=: 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # read -r var val 00:07:48.030 06:33:52 -- accel/accel.sh@20 -- # val=Yes 00:07:48.030 06:33:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # IFS=: 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # read -r var val 00:07:48.030 06:33:52 -- accel/accel.sh@20 -- # val= 00:07:48.030 06:33:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # IFS=: 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # read -r var val 00:07:48.030 06:33:52 -- accel/accel.sh@20 -- # val= 00:07:48.030 06:33:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # IFS=: 00:07:48.030 06:33:52 -- accel/accel.sh@19 -- # read -r var val 00:07:49.404 06:33:53 -- accel/accel.sh@20 -- # val= 00:07:49.404 06:33:53 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.404 06:33:53 -- accel/accel.sh@19 -- # IFS=: 00:07:49.404 06:33:53 -- accel/accel.sh@19 -- # read -r var val 00:07:49.404 06:33:53 -- accel/accel.sh@20 -- # val= 00:07:49.404 06:33:53 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.404 06:33:53 -- accel/accel.sh@19 -- # IFS=: 00:07:49.404 06:33:53 -- accel/accel.sh@19 -- # read -r var val 00:07:49.404 06:33:53 -- accel/accel.sh@20 -- # val= 00:07:49.404 06:33:53 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.404 06:33:53 -- accel/accel.sh@19 -- # IFS=: 00:07:49.404 06:33:53 -- accel/accel.sh@19 -- # read -r var val 00:07:49.404 06:33:53 -- accel/accel.sh@20 -- # val= 00:07:49.404 06:33:53 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.404 06:33:53 -- accel/accel.sh@19 -- # IFS=: 00:07:49.404 06:33:53 -- accel/accel.sh@19 -- # read -r var val 00:07:49.404 06:33:53 -- accel/accel.sh@20 -- # val= 00:07:49.404 06:33:53 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.404 06:33:53 -- accel/accel.sh@19 -- # IFS=: 00:07:49.404 06:33:53 -- accel/accel.sh@19 -- # read -r var val 00:07:49.404 06:33:53 -- accel/accel.sh@20 -- # val= 00:07:49.404 06:33:53 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.404 06:33:53 -- accel/accel.sh@19 -- # IFS=: 00:07:49.404 06:33:53 -- accel/accel.sh@19 -- # read -r var val 00:07:49.404 06:33:53 -- accel/accel.sh@20 -- # val= 00:07:49.404 06:33:53 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.404 06:33:53 -- accel/accel.sh@19 -- # IFS=: 00:07:49.404 06:33:53 -- accel/accel.sh@19 -- # read -r var val 
00:07:49.404 06:33:53 -- accel/accel.sh@20 -- # val= 00:07:49.404 06:33:53 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.404 06:33:53 -- accel/accel.sh@19 -- # IFS=: 00:07:49.404 06:33:53 -- accel/accel.sh@19 -- # read -r var val 00:07:49.404 06:33:53 -- accel/accel.sh@20 -- # val= 00:07:49.404 06:33:53 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.404 06:33:53 -- accel/accel.sh@19 -- # IFS=: 00:07:49.404 06:33:53 -- accel/accel.sh@19 -- # read -r var val 00:07:49.404 06:33:53 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:49.404 06:33:53 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:49.404 06:33:53 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.404 00:07:49.404 real 0m1.411s 00:07:49.404 user 0m4.689s 00:07:49.404 sys 0m0.150s 00:07:49.404 06:33:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:49.404 06:33:53 -- common/autotest_common.sh@10 -- # set +x 00:07:49.404 ************************************ 00:07:49.404 END TEST accel_decomp_mcore 00:07:49.404 ************************************ 00:07:49.404 06:33:53 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:49.404 06:33:53 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:49.404 06:33:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.404 06:33:53 -- common/autotest_common.sh@10 -- # set +x 00:07:49.405 ************************************ 00:07:49.405 START TEST accel_decomp_full_mcore 00:07:49.405 ************************************ 00:07:49.405 06:33:53 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:49.405 06:33:53 -- accel/accel.sh@16 -- # local accel_opc 00:07:49.405 06:33:53 -- accel/accel.sh@17 -- # local accel_module 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # IFS=: 00:07:49.405 06:33:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # read -r var val 00:07:49.405 06:33:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:49.405 06:33:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:49.405 06:33:53 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:49.405 06:33:53 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:49.405 06:33:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.405 06:33:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.405 06:33:53 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.405 06:33:53 -- accel/accel.sh@40 -- # local IFS=, 00:07:49.405 06:33:53 -- accel/accel.sh@41 -- # jq -r . 00:07:49.405 [2024-04-17 06:33:53.739612] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:07:49.405 [2024-04-17 06:33:53.739675] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4074953 ] 00:07:49.405 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.405 [2024-04-17 06:33:53.804093] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:49.405 [2024-04-17 06:33:53.896907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.405 [2024-04-17 06:33:53.896963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.405 [2024-04-17 06:33:53.897075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:49.405 [2024-04-17 06:33:53.897078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.405 [2024-04-17 06:33:53.897759] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:49.405 06:33:53 -- accel/accel.sh@20 -- # val= 00:07:49.405 06:33:53 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # IFS=: 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # read -r var val 00:07:49.405 06:33:53 -- accel/accel.sh@20 -- # val= 00:07:49.405 06:33:53 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # IFS=: 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # read -r var val 00:07:49.405 06:33:53 -- accel/accel.sh@20 -- # val= 00:07:49.405 06:33:53 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # IFS=: 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # read -r var val 00:07:49.405 06:33:53 -- accel/accel.sh@20 -- # val=0xf 00:07:49.405 06:33:53 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # IFS=: 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # read -r var val 00:07:49.405 06:33:53 -- accel/accel.sh@20 -- # val= 00:07:49.405 06:33:53 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # IFS=: 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # read -r var val 00:07:49.405 06:33:53 -- accel/accel.sh@20 -- # val= 00:07:49.405 06:33:53 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # IFS=: 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # read -r var val 00:07:49.405 06:33:53 -- accel/accel.sh@20 -- # val=decompress 00:07:49.405 06:33:53 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.405 06:33:53 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # IFS=: 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # read -r var val 00:07:49.405 06:33:53 -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:49.405 06:33:53 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # IFS=: 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # read -r var val 00:07:49.405 06:33:53 -- accel/accel.sh@20 -- # val= 00:07:49.405 06:33:53 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # IFS=: 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # read -r var val 00:07:49.405 06:33:53 -- accel/accel.sh@20 -- # val=software 00:07:49.405 06:33:53 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.405 06:33:53 -- accel/accel.sh@22 -- # accel_module=software 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # IFS=: 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # read -r var val 00:07:49.405 06:33:53 -- accel/accel.sh@20 -- # 
val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:49.405 06:33:53 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # IFS=: 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # read -r var val 00:07:49.405 06:33:53 -- accel/accel.sh@20 -- # val=32 00:07:49.405 06:33:53 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # IFS=: 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # read -r var val 00:07:49.405 06:33:53 -- accel/accel.sh@20 -- # val=32 00:07:49.405 06:33:53 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # IFS=: 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # read -r var val 00:07:49.405 06:33:53 -- accel/accel.sh@20 -- # val=1 00:07:49.405 06:33:53 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # IFS=: 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # read -r var val 00:07:49.405 06:33:53 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.405 06:33:53 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # IFS=: 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # read -r var val 00:07:49.405 06:33:53 -- accel/accel.sh@20 -- # val=Yes 00:07:49.405 06:33:53 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # IFS=: 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # read -r var val 00:07:49.405 06:33:53 -- accel/accel.sh@20 -- # val= 00:07:49.405 06:33:53 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # IFS=: 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # read -r var val 00:07:49.405 06:33:53 -- accel/accel.sh@20 -- # val= 00:07:49.405 06:33:53 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # IFS=: 00:07:49.405 06:33:53 -- accel/accel.sh@19 -- # read -r var val 00:07:50.812 06:33:55 -- accel/accel.sh@20 -- # val= 00:07:50.812 06:33:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.812 06:33:55 -- accel/accel.sh@19 -- # IFS=: 00:07:50.812 06:33:55 -- accel/accel.sh@19 -- # read -r var val 00:07:50.812 06:33:55 -- accel/accel.sh@20 -- # val= 00:07:50.812 06:33:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.812 06:33:55 -- accel/accel.sh@19 -- # IFS=: 00:07:50.812 06:33:55 -- accel/accel.sh@19 -- # read -r var val 00:07:50.812 06:33:55 -- accel/accel.sh@20 -- # val= 00:07:50.812 06:33:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.812 06:33:55 -- accel/accel.sh@19 -- # IFS=: 00:07:50.812 06:33:55 -- accel/accel.sh@19 -- # read -r var val 00:07:50.812 06:33:55 -- accel/accel.sh@20 -- # val= 00:07:50.812 06:33:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.812 06:33:55 -- accel/accel.sh@19 -- # IFS=: 00:07:50.812 06:33:55 -- accel/accel.sh@19 -- # read -r var val 00:07:50.812 06:33:55 -- accel/accel.sh@20 -- # val= 00:07:50.812 06:33:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.812 06:33:55 -- accel/accel.sh@19 -- # IFS=: 00:07:50.812 06:33:55 -- accel/accel.sh@19 -- # read -r var val 00:07:50.812 06:33:55 -- accel/accel.sh@20 -- # val= 00:07:50.812 06:33:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.812 06:33:55 -- accel/accel.sh@19 -- # IFS=: 00:07:50.812 06:33:55 -- accel/accel.sh@19 -- # read -r var val 00:07:50.812 06:33:55 -- accel/accel.sh@20 -- # val= 00:07:50.812 06:33:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.812 06:33:55 -- accel/accel.sh@19 -- # IFS=: 00:07:50.812 06:33:55 -- accel/accel.sh@19 -- # read -r var val 
00:07:50.812 06:33:55 -- accel/accel.sh@20 -- # val= 00:07:50.812 06:33:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.812 06:33:55 -- accel/accel.sh@19 -- # IFS=: 00:07:50.812 06:33:55 -- accel/accel.sh@19 -- # read -r var val 00:07:50.812 06:33:55 -- accel/accel.sh@20 -- # val= 00:07:50.812 06:33:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.812 06:33:55 -- accel/accel.sh@19 -- # IFS=: 00:07:50.812 06:33:55 -- accel/accel.sh@19 -- # read -r var val 00:07:50.812 06:33:55 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:50.812 06:33:55 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:50.812 06:33:55 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.812 00:07:50.812 real 0m1.425s 00:07:50.812 user 0m4.744s 00:07:50.812 sys 0m0.157s 00:07:50.812 06:33:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:50.813 06:33:55 -- common/autotest_common.sh@10 -- # set +x 00:07:50.813 ************************************ 00:07:50.813 END TEST accel_decomp_full_mcore 00:07:50.813 ************************************ 00:07:50.813 06:33:55 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:50.813 06:33:55 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:50.813 06:33:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.813 06:33:55 -- common/autotest_common.sh@10 -- # set +x 00:07:50.813 ************************************ 00:07:50.813 START TEST accel_decomp_mthread 00:07:50.813 ************************************ 00:07:50.813 06:33:55 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:50.813 06:33:55 -- accel/accel.sh@16 -- # local accel_opc 00:07:50.813 06:33:55 -- accel/accel.sh@17 -- # local accel_module 00:07:50.813 06:33:55 -- accel/accel.sh@19 -- # IFS=: 00:07:50.813 06:33:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:50.813 06:33:55 -- accel/accel.sh@19 -- # read -r var val 00:07:50.813 06:33:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:50.813 06:33:55 -- accel/accel.sh@12 -- # build_accel_config 00:07:50.813 06:33:55 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:50.813 06:33:55 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:50.813 06:33:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.813 06:33:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.813 06:33:55 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:50.813 06:33:55 -- accel/accel.sh@40 -- # local IFS=, 00:07:50.813 06:33:55 -- accel/accel.sh@41 -- # jq -r . 00:07:50.813 [2024-04-17 06:33:55.297144] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:07:50.813 [2024-04-17 06:33:55.297234] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4075237 ] 00:07:50.813 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.813 [2024-04-17 06:33:55.359490] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.071 [2024-04-17 06:33:55.450129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.071 [2024-04-17 06:33:55.450803] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:51.071 06:33:55 -- accel/accel.sh@20 -- # val= 00:07:51.071 06:33:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.071 06:33:55 -- accel/accel.sh@19 -- # IFS=: 00:07:51.071 06:33:55 -- accel/accel.sh@19 -- # read -r var val 00:07:51.071 06:33:55 -- accel/accel.sh@20 -- # val= 00:07:51.071 06:33:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.071 06:33:55 -- accel/accel.sh@19 -- # IFS=: 00:07:51.071 06:33:55 -- accel/accel.sh@19 -- # read -r var val 00:07:51.071 06:33:55 -- accel/accel.sh@20 -- # val= 00:07:51.071 06:33:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.071 06:33:55 -- accel/accel.sh@19 -- # IFS=: 00:07:51.071 06:33:55 -- accel/accel.sh@19 -- # read -r var val 00:07:51.071 06:33:55 -- accel/accel.sh@20 -- # val=0x1 00:07:51.071 06:33:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.071 06:33:55 -- accel/accel.sh@19 -- # IFS=: 00:07:51.071 06:33:55 -- accel/accel.sh@19 -- # read -r var val 00:07:51.071 06:33:55 -- accel/accel.sh@20 -- # val= 00:07:51.071 06:33:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.071 06:33:55 -- accel/accel.sh@19 -- # IFS=: 00:07:51.071 06:33:55 -- accel/accel.sh@19 -- # read -r var val 00:07:51.071 06:33:55 -- accel/accel.sh@20 -- # val= 00:07:51.071 06:33:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.071 06:33:55 -- accel/accel.sh@19 -- # IFS=: 00:07:51.071 06:33:55 -- accel/accel.sh@19 -- # read -r var val 00:07:51.071 06:33:55 -- accel/accel.sh@20 -- # val=decompress 00:07:51.071 06:33:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.071 06:33:55 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:51.071 06:33:55 -- accel/accel.sh@19 -- # IFS=: 00:07:51.071 06:33:55 -- accel/accel.sh@19 -- # read -r var val 00:07:51.071 06:33:55 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:51.071 06:33:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.071 06:33:55 -- accel/accel.sh@19 -- # IFS=: 00:07:51.071 06:33:55 -- accel/accel.sh@19 -- # read -r var val 00:07:51.071 06:33:55 -- accel/accel.sh@20 -- # val= 00:07:51.071 06:33:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.071 06:33:55 -- accel/accel.sh@19 -- # IFS=: 00:07:51.071 06:33:55 -- accel/accel.sh@19 -- # read -r var val 00:07:51.071 06:33:55 -- accel/accel.sh@20 -- # val=software 00:07:51.071 06:33:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.071 06:33:55 -- accel/accel.sh@22 -- # accel_module=software 00:07:51.071 06:33:55 -- accel/accel.sh@19 -- # IFS=: 00:07:51.071 06:33:55 -- accel/accel.sh@19 -- # read -r var val 00:07:51.071 06:33:55 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:51.071 06:33:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.071 06:33:55 -- accel/accel.sh@19 -- # IFS=: 00:07:51.071 06:33:55 -- accel/accel.sh@19 -- # read -r var val 00:07:51.071 06:33:55 -- accel/accel.sh@20 -- # val=32 00:07:51.071 06:33:55 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:51.071 06:33:55 -- accel/accel.sh@19 -- # IFS=: 00:07:51.071 06:33:55 -- accel/accel.sh@19 -- # read -r var val 00:07:51.071 06:33:55 -- accel/accel.sh@20 -- # val=32 00:07:51.071 06:33:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.071 06:33:55 -- accel/accel.sh@19 -- # IFS=: 00:07:51.071 06:33:55 -- accel/accel.sh@19 -- # read -r var val 00:07:51.071 06:33:55 -- accel/accel.sh@20 -- # val=2 00:07:51.071 06:33:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.071 06:33:55 -- accel/accel.sh@19 -- # IFS=: 00:07:51.071 06:33:55 -- accel/accel.sh@19 -- # read -r var val 00:07:51.071 06:33:55 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:51.071 06:33:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.071 06:33:55 -- accel/accel.sh@19 -- # IFS=: 00:07:51.071 06:33:55 -- accel/accel.sh@19 -- # read -r var val 00:07:51.071 06:33:55 -- accel/accel.sh@20 -- # val=Yes 00:07:51.071 06:33:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.071 06:33:55 -- accel/accel.sh@19 -- # IFS=: 00:07:51.071 06:33:55 -- accel/accel.sh@19 -- # read -r var val 00:07:51.071 06:33:55 -- accel/accel.sh@20 -- # val= 00:07:51.072 06:33:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.072 06:33:55 -- accel/accel.sh@19 -- # IFS=: 00:07:51.072 06:33:55 -- accel/accel.sh@19 -- # read -r var val 00:07:51.072 06:33:55 -- accel/accel.sh@20 -- # val= 00:07:51.072 06:33:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.072 06:33:55 -- accel/accel.sh@19 -- # IFS=: 00:07:51.072 06:33:55 -- accel/accel.sh@19 -- # read -r var val 00:07:52.445 06:33:56 -- accel/accel.sh@20 -- # val= 00:07:52.445 06:33:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.445 06:33:56 -- accel/accel.sh@19 -- # IFS=: 00:07:52.445 06:33:56 -- accel/accel.sh@19 -- # read -r var val 00:07:52.445 06:33:56 -- accel/accel.sh@20 -- # val= 00:07:52.445 06:33:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.445 06:33:56 -- accel/accel.sh@19 -- # IFS=: 00:07:52.445 06:33:56 -- accel/accel.sh@19 -- # read -r var val 00:07:52.445 06:33:56 -- accel/accel.sh@20 -- # val= 00:07:52.445 06:33:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.445 06:33:56 -- accel/accel.sh@19 -- # IFS=: 00:07:52.445 06:33:56 -- accel/accel.sh@19 -- # read -r var val 00:07:52.445 06:33:56 -- accel/accel.sh@20 -- # val= 00:07:52.445 06:33:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.445 06:33:56 -- accel/accel.sh@19 -- # IFS=: 00:07:52.445 06:33:56 -- accel/accel.sh@19 -- # read -r var val 00:07:52.445 06:33:56 -- accel/accel.sh@20 -- # val= 00:07:52.445 06:33:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.445 06:33:56 -- accel/accel.sh@19 -- # IFS=: 00:07:52.445 06:33:56 -- accel/accel.sh@19 -- # read -r var val 00:07:52.445 06:33:56 -- accel/accel.sh@20 -- # val= 00:07:52.445 06:33:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.445 06:33:56 -- accel/accel.sh@19 -- # IFS=: 00:07:52.445 06:33:56 -- accel/accel.sh@19 -- # read -r var val 00:07:52.445 06:33:56 -- accel/accel.sh@20 -- # val= 00:07:52.445 06:33:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.445 06:33:56 -- accel/accel.sh@19 -- # IFS=: 00:07:52.445 06:33:56 -- accel/accel.sh@19 -- # read -r var val 00:07:52.445 06:33:56 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:52.445 06:33:56 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:52.445 06:33:56 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.445 00:07:52.445 real 0m1.416s 00:07:52.445 user 0m1.259s 00:07:52.445 sys 0m0.159s 00:07:52.445 06:33:56 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:07:52.445 06:33:56 -- common/autotest_common.sh@10 -- # set +x 00:07:52.445 ************************************ 00:07:52.445 END TEST accel_decomp_mthread 00:07:52.445 ************************************ 00:07:52.446 06:33:56 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.446 06:33:56 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:52.446 06:33:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:52.446 06:33:56 -- common/autotest_common.sh@10 -- # set +x 00:07:52.446 ************************************ 00:07:52.446 START TEST accel_deomp_full_mthread 00:07:52.446 ************************************ 00:07:52.446 06:33:56 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.446 06:33:56 -- accel/accel.sh@16 -- # local accel_opc 00:07:52.446 06:33:56 -- accel/accel.sh@17 -- # local accel_module 00:07:52.446 06:33:56 -- accel/accel.sh@19 -- # IFS=: 00:07:52.446 06:33:56 -- accel/accel.sh@19 -- # read -r var val 00:07:52.446 06:33:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.446 06:33:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.446 06:33:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:52.446 06:33:56 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:52.446 06:33:56 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:52.446 06:33:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.446 06:33:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.446 06:33:56 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:52.446 06:33:56 -- accel/accel.sh@40 -- # local IFS=, 00:07:52.446 06:33:56 -- accel/accel.sh@41 -- # jq -r . 00:07:52.446 [2024-04-17 06:33:56.833657] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:07:52.446 [2024-04-17 06:33:56.833717] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4075404 ] 00:07:52.446 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.446 [2024-04-17 06:33:56.893242] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.446 [2024-04-17 06:33:56.981731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.446 [2024-04-17 06:33:56.982397] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:52.446 06:33:57 -- accel/accel.sh@20 -- # val= 00:07:52.446 06:33:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.446 06:33:57 -- accel/accel.sh@19 -- # IFS=: 00:07:52.446 06:33:57 -- accel/accel.sh@19 -- # read -r var val 00:07:52.446 06:33:57 -- accel/accel.sh@20 -- # val= 00:07:52.446 06:33:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.446 06:33:57 -- accel/accel.sh@19 -- # IFS=: 00:07:52.446 06:33:57 -- accel/accel.sh@19 -- # read -r var val 00:07:52.446 06:33:57 -- accel/accel.sh@20 -- # val= 00:07:52.446 06:33:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.446 06:33:57 -- accel/accel.sh@19 -- # IFS=: 00:07:52.446 06:33:57 -- accel/accel.sh@19 -- # read -r var val 00:07:52.446 06:33:57 -- accel/accel.sh@20 -- # val=0x1 00:07:52.446 06:33:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.446 06:33:57 -- accel/accel.sh@19 -- # IFS=: 00:07:52.446 06:33:57 -- accel/accel.sh@19 -- # read -r var val 00:07:52.446 06:33:57 -- accel/accel.sh@20 -- # val= 00:07:52.446 06:33:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.446 06:33:57 -- accel/accel.sh@19 -- # IFS=: 00:07:52.446 06:33:57 -- accel/accel.sh@19 -- # read -r var val 00:07:52.446 06:33:57 -- accel/accel.sh@20 -- # val= 00:07:52.446 06:33:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.446 06:33:57 -- accel/accel.sh@19 -- # IFS=: 00:07:52.446 06:33:57 -- accel/accel.sh@19 -- # read -r var val 00:07:52.446 06:33:57 -- accel/accel.sh@20 -- # val=decompress 00:07:52.446 06:33:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.446 06:33:57 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:52.446 06:33:57 -- accel/accel.sh@19 -- # IFS=: 00:07:52.446 06:33:57 -- accel/accel.sh@19 -- # read -r var val 00:07:52.446 06:33:57 -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:52.446 06:33:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.446 06:33:57 -- accel/accel.sh@19 -- # IFS=: 00:07:52.446 06:33:57 -- accel/accel.sh@19 -- # read -r var val 00:07:52.446 06:33:57 -- accel/accel.sh@20 -- # val= 00:07:52.446 06:33:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.446 06:33:57 -- accel/accel.sh@19 -- # IFS=: 00:07:52.446 06:33:57 -- accel/accel.sh@19 -- # read -r var val 00:07:52.446 06:33:57 -- accel/accel.sh@20 -- # val=software 00:07:52.446 06:33:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.446 06:33:57 -- accel/accel.sh@22 -- # accel_module=software 00:07:52.446 06:33:57 -- accel/accel.sh@19 -- # IFS=: 00:07:52.446 06:33:57 -- accel/accel.sh@19 -- # read -r var val 00:07:52.446 06:33:57 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:52.446 06:33:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.446 06:33:57 -- accel/accel.sh@19 -- # IFS=: 00:07:52.446 06:33:57 -- accel/accel.sh@19 -- # read -r var val 00:07:52.704 06:33:57 -- accel/accel.sh@20 -- # val=32 00:07:52.704 06:33:57 
-- accel/accel.sh@21 -- # case "$var" in 00:07:52.704 06:33:57 -- accel/accel.sh@19 -- # IFS=: 00:07:52.704 06:33:57 -- accel/accel.sh@19 -- # read -r var val 00:07:52.704 06:33:57 -- accel/accel.sh@20 -- # val=32 00:07:52.704 06:33:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.704 06:33:57 -- accel/accel.sh@19 -- # IFS=: 00:07:52.704 06:33:57 -- accel/accel.sh@19 -- # read -r var val 00:07:52.704 06:33:57 -- accel/accel.sh@20 -- # val=2 00:07:52.704 06:33:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.704 06:33:57 -- accel/accel.sh@19 -- # IFS=: 00:07:52.704 06:33:57 -- accel/accel.sh@19 -- # read -r var val 00:07:52.704 06:33:57 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:52.704 06:33:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.704 06:33:57 -- accel/accel.sh@19 -- # IFS=: 00:07:52.704 06:33:57 -- accel/accel.sh@19 -- # read -r var val 00:07:52.704 06:33:57 -- accel/accel.sh@20 -- # val=Yes 00:07:52.704 06:33:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.704 06:33:57 -- accel/accel.sh@19 -- # IFS=: 00:07:52.704 06:33:57 -- accel/accel.sh@19 -- # read -r var val 00:07:52.704 06:33:57 -- accel/accel.sh@20 -- # val= 00:07:52.704 06:33:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.704 06:33:57 -- accel/accel.sh@19 -- # IFS=: 00:07:52.704 06:33:57 -- accel/accel.sh@19 -- # read -r var val 00:07:52.704 06:33:57 -- accel/accel.sh@20 -- # val= 00:07:52.704 06:33:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.704 06:33:57 -- accel/accel.sh@19 -- # IFS=: 00:07:52.704 06:33:57 -- accel/accel.sh@19 -- # read -r var val 00:07:54.078 06:33:58 -- accel/accel.sh@20 -- # val= 00:07:54.078 06:33:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.078 06:33:58 -- accel/accel.sh@19 -- # IFS=: 00:07:54.078 06:33:58 -- accel/accel.sh@19 -- # read -r var val 00:07:54.078 06:33:58 -- accel/accel.sh@20 -- # val= 00:07:54.078 06:33:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.078 06:33:58 -- accel/accel.sh@19 -- # IFS=: 00:07:54.078 06:33:58 -- accel/accel.sh@19 -- # read -r var val 00:07:54.078 06:33:58 -- accel/accel.sh@20 -- # val= 00:07:54.078 06:33:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.078 06:33:58 -- accel/accel.sh@19 -- # IFS=: 00:07:54.078 06:33:58 -- accel/accel.sh@19 -- # read -r var val 00:07:54.078 06:33:58 -- accel/accel.sh@20 -- # val= 00:07:54.078 06:33:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.078 06:33:58 -- accel/accel.sh@19 -- # IFS=: 00:07:54.078 06:33:58 -- accel/accel.sh@19 -- # read -r var val 00:07:54.078 06:33:58 -- accel/accel.sh@20 -- # val= 00:07:54.078 06:33:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.078 06:33:58 -- accel/accel.sh@19 -- # IFS=: 00:07:54.078 06:33:58 -- accel/accel.sh@19 -- # read -r var val 00:07:54.078 06:33:58 -- accel/accel.sh@20 -- # val= 00:07:54.078 06:33:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.078 06:33:58 -- accel/accel.sh@19 -- # IFS=: 00:07:54.078 06:33:58 -- accel/accel.sh@19 -- # read -r var val 00:07:54.078 06:33:58 -- accel/accel.sh@20 -- # val= 00:07:54.078 06:33:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.078 06:33:58 -- accel/accel.sh@19 -- # IFS=: 00:07:54.078 06:33:58 -- accel/accel.sh@19 -- # read -r var val 00:07:54.078 06:33:58 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:54.078 06:33:58 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:54.078 06:33:58 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:54.078 00:07:54.078 real 0m1.436s 00:07:54.078 user 0m1.295s 00:07:54.078 sys 0m0.144s 00:07:54.078 06:33:58 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:07:54.078 06:33:58 -- common/autotest_common.sh@10 -- # set +x 00:07:54.078 ************************************ 00:07:54.078 END TEST accel_deomp_full_mthread 00:07:54.078 ************************************ 00:07:54.078 06:33:58 -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:54.078 06:33:58 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:54.078 06:33:58 -- accel/accel.sh@137 -- # build_accel_config 00:07:54.078 06:33:58 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:54.078 06:33:58 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:54.078 06:33:58 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:54.078 06:33:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.078 06:33:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.078 06:33:58 -- common/autotest_common.sh@10 -- # set +x 00:07:54.078 06:33:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.078 06:33:58 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:54.078 06:33:58 -- accel/accel.sh@40 -- # local IFS=, 00:07:54.078 06:33:58 -- accel/accel.sh@41 -- # jq -r . 00:07:54.078 ************************************ 00:07:54.078 START TEST accel_dif_functional_tests 00:07:54.078 ************************************ 00:07:54.078 06:33:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:54.078 [2024-04-17 06:33:58.408573] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:07:54.078 [2024-04-17 06:33:58.408646] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4075607 ] 00:07:54.078 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.078 [2024-04-17 06:33:58.469265] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:54.078 [2024-04-17 06:33:58.562995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.078 [2024-04-17 06:33:58.563061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.078 [2024-04-17 06:33:58.563063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.078 [2024-04-17 06:33:58.563769] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:07:54.078 00:07:54.078 00:07:54.078 CUnit - A unit testing framework for C - Version 2.1-3 00:07:54.078 http://cunit.sourceforge.net/ 00:07:54.078 00:07:54.078 00:07:54.078 Suite: accel_dif 00:07:54.078 Test: verify: DIF generated, GUARD check ...passed 00:07:54.078 Test: verify: DIF generated, APPTAG check ...passed 00:07:54.078 Test: verify: DIF generated, REFTAG check ...passed 00:07:54.078 Test: verify: DIF not generated, GUARD check ...[2024-04-17 06:33:58.656384] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:54.078 [2024-04-17 06:33:58.656442] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:54.078 passed 00:07:54.078 Test: verify: DIF not generated, APPTAG check ...[2024-04-17 06:33:58.656477] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:54.078 [2024-04-17 06:33:58.656503] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:54.078 
passed 00:07:54.078 Test: verify: DIF not generated, REFTAG check ...[2024-04-17 06:33:58.656531] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:54.078 [2024-04-17 06:33:58.656556] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:54.078 passed 00:07:54.078 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:54.078 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-17 06:33:58.656616] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:54.078 passed 00:07:54.078 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:54.078 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:54.078 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:54.078 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-17 06:33:58.656744] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:54.078 passed 00:07:54.078 Test: generate copy: DIF generated, GUARD check ...passed 00:07:54.078 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:54.078 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:54.078 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:54.078 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:54.078 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:54.078 Test: generate copy: iovecs-len validate ...[2024-04-17 06:33:58.656959] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:54.078 passed 00:07:54.078 Test: generate copy: buffer alignment validate ...passed 00:07:54.078 00:07:54.078 Run Summary: Type Total Ran Passed Failed Inactive 00:07:54.078 suites 1 1 n/a 0 0 00:07:54.078 tests 20 20 20 0 0 00:07:54.079 asserts 204 204 204 0 n/a 00:07:54.079 00:07:54.079 Elapsed time = 0.000 seconds 00:07:54.337 00:07:54.337 real 0m0.492s 00:07:54.337 user 0m0.721s 00:07:54.337 sys 0m0.177s 00:07:54.337 06:33:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:54.337 06:33:58 -- common/autotest_common.sh@10 -- # set +x 00:07:54.337 ************************************ 00:07:54.337 END TEST accel_dif_functional_tests 00:07:54.337 ************************************ 00:07:54.337 00:07:54.337 real 0m33.617s 00:07:54.337 user 0m35.751s 00:07:54.337 sys 0m5.485s 00:07:54.337 06:33:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:54.337 06:33:58 -- common/autotest_common.sh@10 -- # set +x 00:07:54.337 ************************************ 00:07:54.337 END TEST accel 00:07:54.337 ************************************ 00:07:54.337 06:33:58 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:54.337 06:33:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:54.337 06:33:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.337 06:33:58 -- common/autotest_common.sh@10 -- # set +x 00:07:54.595 ************************************ 00:07:54.595 START TEST accel_rpc 00:07:54.595 ************************************ 00:07:54.595 06:33:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:54.595 * Looking for test storage... 
00:07:54.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:54.595 06:33:59 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:54.595 06:33:59 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=4075765 00:07:54.595 06:33:59 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:54.595 06:33:59 -- accel/accel_rpc.sh@15 -- # waitforlisten 4075765 00:07:54.595 06:33:59 -- common/autotest_common.sh@817 -- # '[' -z 4075765 ']' 00:07:54.595 06:33:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.595 06:33:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:54.595 06:33:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.595 06:33:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:54.595 06:33:59 -- common/autotest_common.sh@10 -- # set +x 00:07:54.595 [2024-04-17 06:33:59.101901] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:07:54.595 [2024-04-17 06:33:59.101982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4075765 ] 00:07:54.595 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.595 [2024-04-17 06:33:59.163229] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.854 [2024-04-17 06:33:59.249697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.854 06:33:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:54.854 06:33:59 -- common/autotest_common.sh@850 -- # return 0 00:07:54.854 06:33:59 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:54.854 06:33:59 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:54.854 06:33:59 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:54.854 06:33:59 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:54.854 06:33:59 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:54.854 06:33:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:54.854 06:33:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.854 06:33:59 -- common/autotest_common.sh@10 -- # set +x 00:07:54.854 ************************************ 00:07:54.854 START TEST accel_assign_opcode 00:07:54.854 ************************************ 00:07:54.854 06:33:59 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:07:54.854 06:33:59 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:54.854 06:33:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.854 06:33:59 -- common/autotest_common.sh@10 -- # set +x 00:07:54.854 [2024-04-17 06:33:59.398571] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:54.854 06:33:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.854 06:33:59 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:54.854 06:33:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.854 06:33:59 -- common/autotest_common.sh@10 -- # set +x 00:07:54.854 [2024-04-17 06:33:59.406561] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: 
Operation copy will be assigned to module software 00:07:54.854 06:33:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.854 06:33:59 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:54.854 06:33:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.854 06:33:59 -- common/autotest_common.sh@10 -- # set +x 00:07:55.112 06:33:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.112 06:33:59 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:55.112 06:33:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.112 06:33:59 -- common/autotest_common.sh@10 -- # set +x 00:07:55.112 06:33:59 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:55.112 06:33:59 -- accel/accel_rpc.sh@42 -- # grep software 00:07:55.112 06:33:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.112 software 00:07:55.112 00:07:55.112 real 0m0.294s 00:07:55.112 user 0m0.040s 00:07:55.112 sys 0m0.006s 00:07:55.112 06:33:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:55.112 06:33:59 -- common/autotest_common.sh@10 -- # set +x 00:07:55.112 ************************************ 00:07:55.112 END TEST accel_assign_opcode 00:07:55.112 ************************************ 00:07:55.112 06:33:59 -- accel/accel_rpc.sh@55 -- # killprocess 4075765 00:07:55.112 06:33:59 -- common/autotest_common.sh@936 -- # '[' -z 4075765 ']' 00:07:55.112 06:33:59 -- common/autotest_common.sh@940 -- # kill -0 4075765 00:07:55.112 06:33:59 -- common/autotest_common.sh@941 -- # uname 00:07:55.112 06:33:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:55.112 06:33:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4075765 00:07:55.371 06:33:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:55.371 06:33:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:55.371 06:33:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4075765' 00:07:55.371 killing process with pid 4075765 00:07:55.371 06:33:59 -- common/autotest_common.sh@955 -- # kill 4075765 00:07:55.371 06:33:59 -- common/autotest_common.sh@960 -- # wait 4075765 00:07:55.629 00:07:55.629 real 0m1.134s 00:07:55.629 user 0m1.081s 00:07:55.629 sys 0m0.457s 00:07:55.629 06:34:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:55.629 06:34:00 -- common/autotest_common.sh@10 -- # set +x 00:07:55.629 ************************************ 00:07:55.629 END TEST accel_rpc 00:07:55.629 ************************************ 00:07:55.629 06:34:00 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:55.629 06:34:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:55.629 06:34:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:55.629 06:34:00 -- common/autotest_common.sh@10 -- # set +x 00:07:55.887 ************************************ 00:07:55.887 START TEST app_cmdline 00:07:55.887 ************************************ 00:07:55.887 06:34:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:55.887 * Looking for test storage... 
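The opcode-assignment sequence above reduces to three plain RPC calls against an spdk_tgt started with --wait-for-rpc, using the same method names and jq filter that appear in the trace:

  ./scripts/rpc.py accel_assign_opc -o copy -m software     # accepted while the target is still waiting for init
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # expected to print: software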
00:07:55.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:55.888 06:34:00 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:55.888 06:34:00 -- app/cmdline.sh@17 -- # spdk_tgt_pid=4075989 00:07:55.888 06:34:00 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:55.888 06:34:00 -- app/cmdline.sh@18 -- # waitforlisten 4075989 00:07:55.888 06:34:00 -- common/autotest_common.sh@817 -- # '[' -z 4075989 ']' 00:07:55.888 06:34:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.888 06:34:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:55.888 06:34:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.888 06:34:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:55.888 06:34:00 -- common/autotest_common.sh@10 -- # set +x 00:07:55.888 [2024-04-17 06:34:00.355633] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:07:55.888 [2024-04-17 06:34:00.355721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4075989 ] 00:07:55.888 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.888 [2024-04-17 06:34:00.415740] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.146 [2024-04-17 06:34:00.503380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.404 06:34:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:56.404 06:34:00 -- common/autotest_common.sh@850 -- # return 0 00:07:56.404 06:34:00 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:56.404 { 00:07:56.404 "version": "SPDK v24.05-pre git sha1 9c9f7ddbb", 00:07:56.404 "fields": { 00:07:56.404 "major": 24, 00:07:56.404 "minor": 5, 00:07:56.404 "patch": 0, 00:07:56.404 "suffix": "-pre", 00:07:56.404 "commit": "9c9f7ddbb" 00:07:56.404 } 00:07:56.404 } 00:07:56.404 06:34:00 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:56.404 06:34:00 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:56.404 06:34:00 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:56.404 06:34:00 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:56.404 06:34:00 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:56.404 06:34:00 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:56.404 06:34:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:56.404 06:34:00 -- app/cmdline.sh@26 -- # sort 00:07:56.404 06:34:00 -- common/autotest_common.sh@10 -- # set +x 00:07:56.404 06:34:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:56.661 06:34:01 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:56.661 06:34:01 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:56.661 06:34:01 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:56.661 06:34:01 -- common/autotest_common.sh@638 -- # local es=0 00:07:56.661 06:34:01 -- 
common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:56.661 06:34:01 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:56.661 06:34:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:56.661 06:34:01 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:56.661 06:34:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:56.661 06:34:01 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:56.661 06:34:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:56.661 06:34:01 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:56.661 06:34:01 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:56.661 06:34:01 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:56.919 request: 00:07:56.919 { 00:07:56.919 "method": "env_dpdk_get_mem_stats", 00:07:56.919 "req_id": 1 00:07:56.919 } 00:07:56.919 Got JSON-RPC error response 00:07:56.919 response: 00:07:56.919 { 00:07:56.919 "code": -32601, 00:07:56.919 "message": "Method not found" 00:07:56.919 } 00:07:56.919 06:34:01 -- common/autotest_common.sh@641 -- # es=1 00:07:56.919 06:34:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:56.919 06:34:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:56.919 06:34:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:56.919 06:34:01 -- app/cmdline.sh@1 -- # killprocess 4075989 00:07:56.919 06:34:01 -- common/autotest_common.sh@936 -- # '[' -z 4075989 ']' 00:07:56.919 06:34:01 -- common/autotest_common.sh@940 -- # kill -0 4075989 00:07:56.919 06:34:01 -- common/autotest_common.sh@941 -- # uname 00:07:56.919 06:34:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:56.919 06:34:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4075989 00:07:56.919 06:34:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:56.919 06:34:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:56.919 06:34:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4075989' 00:07:56.919 killing process with pid 4075989 00:07:56.919 06:34:01 -- common/autotest_common.sh@955 -- # kill 4075989 00:07:56.919 06:34:01 -- common/autotest_common.sh@960 -- # wait 4075989 00:07:57.178 00:07:57.178 real 0m1.435s 00:07:57.178 user 0m1.760s 00:07:57.178 sys 0m0.437s 00:07:57.178 06:34:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:57.178 06:34:01 -- common/autotest_common.sh@10 -- # set +x 00:07:57.178 ************************************ 00:07:57.178 END TEST app_cmdline 00:07:57.178 ************************************ 00:07:57.178 06:34:01 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:57.178 06:34:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:57.178 06:34:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.178 06:34:01 -- common/autotest_common.sh@10 -- # set +x 00:07:57.436 ************************************ 00:07:57.436 START TEST version 00:07:57.436 
************************************ 00:07:57.436 06:34:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:57.436 * Looking for test storage... 00:07:57.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:57.436 06:34:01 -- app/version.sh@17 -- # get_header_version major 00:07:57.436 06:34:01 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:57.436 06:34:01 -- app/version.sh@14 -- # cut -f2 00:07:57.436 06:34:01 -- app/version.sh@14 -- # tr -d '"' 00:07:57.436 06:34:01 -- app/version.sh@17 -- # major=24 00:07:57.436 06:34:01 -- app/version.sh@18 -- # get_header_version minor 00:07:57.436 06:34:01 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:57.436 06:34:01 -- app/version.sh@14 -- # cut -f2 00:07:57.436 06:34:01 -- app/version.sh@14 -- # tr -d '"' 00:07:57.436 06:34:01 -- app/version.sh@18 -- # minor=5 00:07:57.436 06:34:01 -- app/version.sh@19 -- # get_header_version patch 00:07:57.436 06:34:01 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:57.436 06:34:01 -- app/version.sh@14 -- # cut -f2 00:07:57.436 06:34:01 -- app/version.sh@14 -- # tr -d '"' 00:07:57.436 06:34:01 -- app/version.sh@19 -- # patch=0 00:07:57.436 06:34:01 -- app/version.sh@20 -- # get_header_version suffix 00:07:57.436 06:34:01 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:57.436 06:34:01 -- app/version.sh@14 -- # cut -f2 00:07:57.436 06:34:01 -- app/version.sh@14 -- # tr -d '"' 00:07:57.436 06:34:01 -- app/version.sh@20 -- # suffix=-pre 00:07:57.436 06:34:01 -- app/version.sh@22 -- # version=24.5 00:07:57.436 06:34:01 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:57.436 06:34:01 -- app/version.sh@28 -- # version=24.5rc0 00:07:57.436 06:34:01 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:57.436 06:34:01 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:57.436 06:34:01 -- app/version.sh@30 -- # py_version=24.5rc0 00:07:57.436 06:34:01 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:07:57.436 00:07:57.436 real 0m0.107s 00:07:57.436 user 0m0.067s 00:07:57.436 sys 0m0.061s 00:07:57.436 06:34:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:57.436 06:34:01 -- common/autotest_common.sh@10 -- # set +x 00:07:57.436 ************************************ 00:07:57.436 END TEST version 00:07:57.436 ************************************ 00:07:57.436 06:34:01 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:07:57.436 06:34:01 -- spdk/autotest.sh@194 -- # uname -s 00:07:57.436 06:34:01 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:57.436 06:34:01 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:57.436 06:34:01 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:57.436 06:34:01 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:57.436 06:34:01 
-- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:07:57.436 06:34:01 -- spdk/autotest.sh@258 -- # timing_exit lib 00:07:57.437 06:34:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:57.437 06:34:01 -- common/autotest_common.sh@10 -- # set +x 00:07:57.437 06:34:01 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:07:57.437 06:34:01 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:07:57.437 06:34:01 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:07:57.437 06:34:01 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:07:57.437 06:34:01 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:07:57.437 06:34:01 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:07:57.437 06:34:01 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:57.437 06:34:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:57.437 06:34:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.437 06:34:01 -- common/autotest_common.sh@10 -- # set +x 00:07:57.695 ************************************ 00:07:57.695 START TEST nvmf_tcp 00:07:57.695 ************************************ 00:07:57.695 06:34:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:57.695 * Looking for test storage... 00:07:57.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:57.695 06:34:02 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:57.695 06:34:02 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:57.695 06:34:02 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.695 06:34:02 -- nvmf/common.sh@7 -- # uname -s 00:07:57.695 06:34:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.695 06:34:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.695 06:34:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.695 06:34:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.695 06:34:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.695 06:34:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.695 06:34:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.695 06:34:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.695 06:34:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.695 06:34:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.695 06:34:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:57.695 06:34:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:57.695 06:34:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.695 06:34:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.695 06:34:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:57.695 06:34:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.695 06:34:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:57.695 06:34:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.695 06:34:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.695 06:34:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.695 06:34:02 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.695 06:34:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.695 06:34:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.695 06:34:02 -- paths/export.sh@5 -- # export PATH 00:07:57.695 06:34:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.695 06:34:02 -- nvmf/common.sh@47 -- # : 0 00:07:57.695 06:34:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:57.695 06:34:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:57.695 06:34:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.695 06:34:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.695 06:34:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.695 06:34:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:57.695 06:34:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:57.695 06:34:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:57.695 06:34:02 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:57.695 06:34:02 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:57.695 06:34:02 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:57.695 06:34:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:57.695 06:34:02 -- common/autotest_common.sh@10 -- # set +x 00:07:57.695 06:34:02 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:57.695 06:34:02 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:57.695 06:34:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:57.695 06:34:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.695 06:34:02 -- common/autotest_common.sh@10 -- # set +x 00:07:57.695 ************************************ 00:07:57.695 START TEST nvmf_example 00:07:57.695 ************************************ 00:07:57.695 06:34:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:57.954 * Looking for test storage... 
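test/nvmf/common.sh, sourced above, pins the listener ports (4420, 4421, 4422) and derives the initiator identity from nvme-cli. By hand that is roughly the following; the parameter expansion for the host ID is an assumption, and the values naturally differ per host:

  NVMF_PORT=4420
  NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumed: keep only the uuid suffix, matching the trace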
00:07:57.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:57.954 06:34:02 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.954 06:34:02 -- nvmf/common.sh@7 -- # uname -s 00:07:57.954 06:34:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.954 06:34:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.954 06:34:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.954 06:34:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.954 06:34:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.954 06:34:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.954 06:34:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.954 06:34:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.954 06:34:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.954 06:34:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.954 06:34:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:57.954 06:34:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:57.954 06:34:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.954 06:34:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.954 06:34:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:57.954 06:34:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.954 06:34:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:57.954 06:34:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.954 06:34:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.954 06:34:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.954 06:34:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.954 06:34:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.954 06:34:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.954 06:34:02 -- paths/export.sh@5 -- # export PATH 00:07:57.954 06:34:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.954 06:34:02 -- nvmf/common.sh@47 -- # : 0 00:07:57.954 06:34:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:57.954 06:34:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:57.954 06:34:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.954 06:34:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.954 06:34:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.954 06:34:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:57.954 06:34:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:57.954 06:34:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:57.954 06:34:02 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:57.954 06:34:02 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:57.954 06:34:02 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:57.954 06:34:02 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:57.954 06:34:02 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:57.954 06:34:02 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:57.954 06:34:02 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:57.954 06:34:02 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:57.954 06:34:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:57.954 06:34:02 -- common/autotest_common.sh@10 -- # set +x 00:07:57.954 06:34:02 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:57.954 06:34:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:57.954 06:34:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.954 06:34:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:57.954 06:34:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:57.954 06:34:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:57.954 06:34:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.954 06:34:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:57.954 06:34:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.954 06:34:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:57.954 06:34:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:57.954 06:34:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:57.954 06:34:02 -- 
common/autotest_common.sh@10 -- # set +x 00:07:59.855 06:34:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:59.855 06:34:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:59.855 06:34:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:59.855 06:34:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:59.855 06:34:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:59.855 06:34:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:59.855 06:34:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:59.855 06:34:04 -- nvmf/common.sh@295 -- # net_devs=() 00:07:59.855 06:34:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:59.855 06:34:04 -- nvmf/common.sh@296 -- # e810=() 00:07:59.855 06:34:04 -- nvmf/common.sh@296 -- # local -ga e810 00:07:59.855 06:34:04 -- nvmf/common.sh@297 -- # x722=() 00:07:59.855 06:34:04 -- nvmf/common.sh@297 -- # local -ga x722 00:07:59.855 06:34:04 -- nvmf/common.sh@298 -- # mlx=() 00:07:59.855 06:34:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:59.855 06:34:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:59.855 06:34:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:59.855 06:34:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:59.855 06:34:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:59.855 06:34:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:59.855 06:34:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:59.855 06:34:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:59.855 06:34:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:59.855 06:34:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:59.856 06:34:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:59.856 06:34:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:59.856 06:34:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:59.856 06:34:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:59.856 06:34:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:59.856 06:34:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:59.856 06:34:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:59.856 06:34:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:59.856 06:34:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:59.856 06:34:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:59.856 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:59.856 06:34:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:59.856 06:34:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:59.856 06:34:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.856 06:34:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.856 06:34:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:59.856 06:34:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:59.856 06:34:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:59.856 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:59.856 06:34:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:59.856 06:34:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:59.856 06:34:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.856 06:34:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
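The 0x8086/0x159b pairs reported above are Intel E810 ports driven by ice, which is why the trace compares the driver name against ice before accepting them. Outside the harness the same ports can be listed by vendor:device ID:

  lspci -d 8086:159b     # should show the two ports at 0000:0a:00.0 and 0000:0a:00.1
  # the x722 (8086:37d2) and Mellanox IDs kept in the arrays above are matched the same way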
00:07:59.856 06:34:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:59.856 06:34:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:59.856 06:34:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:59.856 06:34:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:59.856 06:34:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:59.856 06:34:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.856 06:34:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:59.856 06:34:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.856 06:34:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:59.856 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:59.856 06:34:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.856 06:34:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:59.856 06:34:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.856 06:34:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:59.856 06:34:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.856 06:34:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:59.856 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:59.856 06:34:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.856 06:34:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:59.856 06:34:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:59.856 06:34:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:59.856 06:34:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:59.856 06:34:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:59.856 06:34:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:59.856 06:34:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:59.856 06:34:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:59.856 06:34:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:59.856 06:34:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:59.856 06:34:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:59.856 06:34:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:59.856 06:34:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:59.856 06:34:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:59.856 06:34:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:59.856 06:34:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:59.856 06:34:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:59.856 06:34:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:00.113 06:34:04 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:00.113 06:34:04 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:00.113 06:34:04 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:00.113 06:34:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:00.113 06:34:04 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:00.114 06:34:04 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:00.114 06:34:04 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:00.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
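Condensed, the TCP fixture that nvmf_tcp_init just assembled from the two cvl_0_* ports is the following, the same commands as in the trace minus the harness wrappers:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2        # the connectivity check whose replies follow below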
00:08:00.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:08:00.114 00:08:00.114 --- 10.0.0.2 ping statistics --- 00:08:00.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.114 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:08:00.114 06:34:04 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:00.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:00.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:08:00.114 00:08:00.114 --- 10.0.0.1 ping statistics --- 00:08:00.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.114 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:08:00.114 06:34:04 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:00.114 06:34:04 -- nvmf/common.sh@411 -- # return 0 00:08:00.114 06:34:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:00.114 06:34:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:00.114 06:34:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:00.114 06:34:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:00.114 06:34:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:00.114 06:34:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:00.114 06:34:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:00.114 06:34:04 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:00.114 06:34:04 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:00.114 06:34:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:00.114 06:34:04 -- common/autotest_common.sh@10 -- # set +x 00:08:00.114 06:34:04 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:00.114 06:34:04 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:00.114 06:34:04 -- target/nvmf_example.sh@34 -- # nvmfpid=4078029 00:08:00.114 06:34:04 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:00.114 06:34:04 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:00.114 06:34:04 -- target/nvmf_example.sh@36 -- # waitforlisten 4078029 00:08:00.114 06:34:04 -- common/autotest_common.sh@817 -- # '[' -z 4078029 ']' 00:08:00.114 06:34:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.114 06:34:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:00.114 06:34:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
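The example target was just launched inside the target namespace (the ip netns exec line above) and the harness now blocks until its RPC socket answers. A rough standalone equivalent, where the polling loop merely stands in for waitforlisten (an assumption about its mechanics, not a copy of it):

  ip netns exec cvl_0_0_ns_spdk ./build/examples/nvmf -i 0 -g 10000 -m 0xF &
  nvmfpid=$!
  # poll the default RPC socket (/var/tmp/spdk.sock) until the app responds
  until ./scripts/rpc.py -t 1 rpc_get_methods > /dev/null 2>&1; do sleep 1; done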
00:08:00.114 06:34:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:00.114 06:34:04 -- common/autotest_common.sh@10 -- # set +x 00:08:00.114 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.047 06:34:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:01.047 06:34:05 -- common/autotest_common.sh@850 -- # return 0 00:08:01.047 06:34:05 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:01.047 06:34:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:01.047 06:34:05 -- common/autotest_common.sh@10 -- # set +x 00:08:01.047 06:34:05 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:01.047 06:34:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:01.047 06:34:05 -- common/autotest_common.sh@10 -- # set +x 00:08:01.047 06:34:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:01.047 06:34:05 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:01.047 06:34:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:01.047 06:34:05 -- common/autotest_common.sh@10 -- # set +x 00:08:01.047 06:34:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:01.047 06:34:05 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:01.047 06:34:05 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:01.047 06:34:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:01.047 06:34:05 -- common/autotest_common.sh@10 -- # set +x 00:08:01.047 06:34:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:01.047 06:34:05 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:01.047 06:34:05 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:01.047 06:34:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:01.047 06:34:05 -- common/autotest_common.sh@10 -- # set +x 00:08:01.048 06:34:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:01.048 06:34:05 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:01.048 06:34:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:01.048 06:34:05 -- common/autotest_common.sh@10 -- # set +x 00:08:01.305 06:34:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:01.305 06:34:05 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:01.305 06:34:05 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:01.305 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.267 Initializing NVMe Controllers 00:08:11.267 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:11.267 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:11.267 Initialization complete. Launching workers. 
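The spdk_nvme_perf run just started drives the newly exported namespace over TCP: -q 64 sets the queue depth, -o 4096 the I/O size in bytes, -w randrw with -M 30 a random mix with roughly 30% reads, -t 10 the run time in seconds, and -r the transport, address, service port and subsystem NQN to connect to; the IOPS and latency table that follows is its summary output. Outside the harness the invocation is simply:

  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'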
00:08:11.267 ======================================================== 00:08:11.267 Latency(us) 00:08:11.267 Device Information : IOPS MiB/s Average min max 00:08:11.267 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15073.90 58.88 4247.31 884.14 51998.14 00:08:11.267 ======================================================== 00:08:11.267 Total : 15073.90 58.88 4247.31 884.14 51998.14 00:08:11.267 00:08:11.267 06:34:15 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:11.267 06:34:15 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:11.267 06:34:15 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:11.267 06:34:15 -- nvmf/common.sh@117 -- # sync 00:08:11.267 06:34:15 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:11.267 06:34:15 -- nvmf/common.sh@120 -- # set +e 00:08:11.267 06:34:15 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:11.267 06:34:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:11.267 rmmod nvme_tcp 00:08:11.525 rmmod nvme_fabrics 00:08:11.525 rmmod nvme_keyring 00:08:11.525 06:34:15 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:11.525 06:34:15 -- nvmf/common.sh@124 -- # set -e 00:08:11.525 06:34:15 -- nvmf/common.sh@125 -- # return 0 00:08:11.525 06:34:15 -- nvmf/common.sh@478 -- # '[' -n 4078029 ']' 00:08:11.525 06:34:15 -- nvmf/common.sh@479 -- # killprocess 4078029 00:08:11.525 06:34:15 -- common/autotest_common.sh@936 -- # '[' -z 4078029 ']' 00:08:11.525 06:34:15 -- common/autotest_common.sh@940 -- # kill -0 4078029 00:08:11.525 06:34:15 -- common/autotest_common.sh@941 -- # uname 00:08:11.525 06:34:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:11.525 06:34:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4078029 00:08:11.525 06:34:15 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:08:11.525 06:34:15 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:08:11.525 06:34:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4078029' 00:08:11.525 killing process with pid 4078029 00:08:11.525 06:34:15 -- common/autotest_common.sh@955 -- # kill 4078029 00:08:11.525 06:34:15 -- common/autotest_common.sh@960 -- # wait 4078029 00:08:11.784 nvmf threads initialize successfully 00:08:11.784 bdev subsystem init successfully 00:08:11.784 created a nvmf target service 00:08:11.784 create targets's poll groups done 00:08:11.784 all subsystems of target started 00:08:11.784 nvmf target is running 00:08:11.784 all subsystems of target stopped 00:08:11.784 destroy targets's poll groups done 00:08:11.784 destroyed the nvmf target service 00:08:11.784 bdev subsystem finish successfully 00:08:11.784 nvmf threads destroy successfully 00:08:11.784 06:34:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:11.784 06:34:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:11.784 06:34:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:11.784 06:34:16 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:11.784 06:34:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:11.784 06:34:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.784 06:34:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:11.784 06:34:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.719 06:34:18 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:13.719 06:34:18 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:13.719 06:34:18 -- 
common/autotest_common.sh@716 -- # xtrace_disable 00:08:13.719 06:34:18 -- common/autotest_common.sh@10 -- # set +x 00:08:13.719 00:08:13.719 real 0m15.967s 00:08:13.719 user 0m45.167s 00:08:13.719 sys 0m3.302s 00:08:13.719 06:34:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:13.719 06:34:18 -- common/autotest_common.sh@10 -- # set +x 00:08:13.719 ************************************ 00:08:13.719 END TEST nvmf_example 00:08:13.719 ************************************ 00:08:13.719 06:34:18 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:13.719 06:34:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:13.719 06:34:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:13.719 06:34:18 -- common/autotest_common.sh@10 -- # set +x 00:08:13.979 ************************************ 00:08:13.979 START TEST nvmf_filesystem 00:08:13.979 ************************************ 00:08:13.979 06:34:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:13.979 * Looking for test storage... 00:08:13.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:13.979 06:34:18 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:13.979 06:34:18 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:13.979 06:34:18 -- common/autotest_common.sh@34 -- # set -e 00:08:13.979 06:34:18 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:13.979 06:34:18 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:13.979 06:34:18 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:13.979 06:34:18 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:13.979 06:34:18 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:13.979 06:34:18 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:13.979 06:34:18 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:13.979 06:34:18 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:13.979 06:34:18 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:13.979 06:34:18 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:13.979 06:34:18 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:13.979 06:34:18 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:13.979 06:34:18 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:13.979 06:34:18 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:13.979 06:34:18 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:13.979 06:34:18 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:13.979 06:34:18 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:13.979 06:34:18 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:13.979 06:34:18 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:13.979 06:34:18 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:13.979 06:34:18 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:13.979 06:34:18 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:13.979 06:34:18 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:13.979 06:34:18 -- 
common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:13.979 06:34:18 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:13.979 06:34:18 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:13.979 06:34:18 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:13.979 06:34:18 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:13.979 06:34:18 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:13.979 06:34:18 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:13.979 06:34:18 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:13.979 06:34:18 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:13.979 06:34:18 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:13.979 06:34:18 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:13.979 06:34:18 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:13.979 06:34:18 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:13.979 06:34:18 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:13.979 06:34:18 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:13.979 06:34:18 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:13.979 06:34:18 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:13.979 06:34:18 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:13.979 06:34:18 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:13.979 06:34:18 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:13.979 06:34:18 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:13.979 06:34:18 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:13.979 06:34:18 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:13.979 06:34:18 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:13.979 06:34:18 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:13.979 06:34:18 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:13.979 06:34:18 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:13.979 06:34:18 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:13.979 06:34:18 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:13.979 06:34:18 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:13.979 06:34:18 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:13.979 06:34:18 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:13.979 06:34:18 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:08:13.979 06:34:18 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:13.979 06:34:18 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:08:13.979 06:34:18 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:08:13.979 06:34:18 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:08:13.979 06:34:18 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:08:13.979 06:34:18 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:08:13.979 06:34:18 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:08:13.979 06:34:18 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:08:13.980 06:34:18 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:08:13.980 06:34:18 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:08:13.980 06:34:18 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:13.980 06:34:18 -- common/build_config.sh@63 
-- # CONFIG_RDMA_PROV=verbs 00:08:13.980 06:34:18 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:08:13.980 06:34:18 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:08:13.980 06:34:18 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:08:13.980 06:34:18 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:08:13.980 06:34:18 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:13.980 06:34:18 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:08:13.980 06:34:18 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:08:13.980 06:34:18 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:08:13.980 06:34:18 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:08:13.980 06:34:18 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:08:13.980 06:34:18 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:08:13.980 06:34:18 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:08:13.980 06:34:18 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:08:13.980 06:34:18 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:08:13.980 06:34:18 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:08:13.980 06:34:18 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:08:13.980 06:34:18 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:13.980 06:34:18 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:08:13.980 06:34:18 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:08:13.980 06:34:18 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:13.980 06:34:18 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:13.980 06:34:18 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:13.980 06:34:18 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:13.980 06:34:18 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:13.980 06:34:18 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:13.980 06:34:18 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:13.980 06:34:18 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:13.980 06:34:18 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:13.980 06:34:18 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:13.980 06:34:18 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:13.980 06:34:18 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:13.980 06:34:18 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:13.980 06:34:18 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:13.980 06:34:18 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:13.980 06:34:18 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:13.980 #define SPDK_CONFIG_H 00:08:13.980 #define SPDK_CONFIG_APPS 1 00:08:13.980 #define SPDK_CONFIG_ARCH native 00:08:13.980 #undef SPDK_CONFIG_ASAN 00:08:13.980 #undef SPDK_CONFIG_AVAHI 00:08:13.980 #undef SPDK_CONFIG_CET 00:08:13.980 #define SPDK_CONFIG_COVERAGE 1 00:08:13.980 #define 
SPDK_CONFIG_CROSS_PREFIX 00:08:13.980 #undef SPDK_CONFIG_CRYPTO 00:08:13.980 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:13.980 #undef SPDK_CONFIG_CUSTOMOCF 00:08:13.980 #undef SPDK_CONFIG_DAOS 00:08:13.980 #define SPDK_CONFIG_DAOS_DIR 00:08:13.980 #define SPDK_CONFIG_DEBUG 1 00:08:13.980 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:13.980 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:13.980 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:13.980 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:13.980 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:13.980 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:13.980 #define SPDK_CONFIG_EXAMPLES 1 00:08:13.980 #undef SPDK_CONFIG_FC 00:08:13.980 #define SPDK_CONFIG_FC_PATH 00:08:13.980 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:13.980 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:13.980 #undef SPDK_CONFIG_FUSE 00:08:13.980 #undef SPDK_CONFIG_FUZZER 00:08:13.980 #define SPDK_CONFIG_FUZZER_LIB 00:08:13.980 #undef SPDK_CONFIG_GOLANG 00:08:13.980 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:13.980 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:13.980 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:13.980 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:08:13.980 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:13.980 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:13.980 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:13.980 #define SPDK_CONFIG_IDXD 1 00:08:13.980 #undef SPDK_CONFIG_IDXD_KERNEL 00:08:13.980 #undef SPDK_CONFIG_IPSEC_MB 00:08:13.980 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:13.980 #define SPDK_CONFIG_ISAL 1 00:08:13.980 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:13.980 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:13.980 #define SPDK_CONFIG_LIBDIR 00:08:13.980 #undef SPDK_CONFIG_LTO 00:08:13.980 #define SPDK_CONFIG_MAX_LCORES 00:08:13.980 #define SPDK_CONFIG_NVME_CUSE 1 00:08:13.980 #undef SPDK_CONFIG_OCF 00:08:13.980 #define SPDK_CONFIG_OCF_PATH 00:08:13.980 #define SPDK_CONFIG_OPENSSL_PATH 00:08:13.980 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:13.980 #define SPDK_CONFIG_PGO_DIR 00:08:13.980 #undef SPDK_CONFIG_PGO_USE 00:08:13.980 #define SPDK_CONFIG_PREFIX /usr/local 00:08:13.980 #undef SPDK_CONFIG_RAID5F 00:08:13.980 #undef SPDK_CONFIG_RBD 00:08:13.980 #define SPDK_CONFIG_RDMA 1 00:08:13.980 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:13.980 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:13.980 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:13.980 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:13.980 #define SPDK_CONFIG_SHARED 1 00:08:13.980 #undef SPDK_CONFIG_SMA 00:08:13.980 #define SPDK_CONFIG_TESTS 1 00:08:13.980 #undef SPDK_CONFIG_TSAN 00:08:13.980 #define SPDK_CONFIG_UBLK 1 00:08:13.980 #define SPDK_CONFIG_UBSAN 1 00:08:13.980 #undef SPDK_CONFIG_UNIT_TESTS 00:08:13.980 #undef SPDK_CONFIG_URING 00:08:13.980 #define SPDK_CONFIG_URING_PATH 00:08:13.980 #undef SPDK_CONFIG_URING_ZNS 00:08:13.980 #undef SPDK_CONFIG_USDT 00:08:13.980 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:13.980 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:13.980 #define SPDK_CONFIG_VFIO_USER 1 00:08:13.980 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:13.980 #define SPDK_CONFIG_VHOST 1 00:08:13.980 #define SPDK_CONFIG_VIRTIO 1 00:08:13.980 #undef SPDK_CONFIG_VTUNE 00:08:13.980 #define SPDK_CONFIG_VTUNE_DIR 00:08:13.980 #define SPDK_CONFIG_WERROR 1 00:08:13.980 #define SPDK_CONFIG_WPDK_DIR 00:08:13.980 #undef SPDK_CONFIG_XNVME 00:08:13.980 #endif /* 
SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:13.980 06:34:18 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:13.980 06:34:18 -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:13.980 06:34:18 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.980 06:34:18 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.980 06:34:18 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.980 06:34:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.980 06:34:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.980 06:34:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.980 06:34:18 -- paths/export.sh@5 -- # export PATH 00:08:13.980 06:34:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.980 06:34:18 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:13.980 06:34:18 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:13.980 06:34:18 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:13.980 06:34:18 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:13.980 06:34:18 -- pm/common@7 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:13.980 06:34:18 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:13.980 06:34:18 -- pm/common@67 -- # TEST_TAG=N/A 00:08:13.980 06:34:18 -- pm/common@68 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:13.980 06:34:18 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:13.980 06:34:18 -- pm/common@71 -- # uname -s 00:08:13.980 06:34:18 -- pm/common@71 -- # PM_OS=Linux 00:08:13.981 06:34:18 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:13.981 06:34:18 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:08:13.981 06:34:18 -- pm/common@76 -- # [[ Linux == Linux ]] 00:08:13.981 06:34:18 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:08:13.981 06:34:18 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:08:13.981 06:34:18 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:13.981 06:34:18 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:13.981 06:34:18 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:08:13.981 06:34:18 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:08:13.981 06:34:18 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:13.981 06:34:18 -- common/autotest_common.sh@57 -- # : 1 00:08:13.981 06:34:18 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:08:13.981 06:34:18 -- common/autotest_common.sh@61 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:13.981 06:34:18 -- common/autotest_common.sh@63 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:08:13.981 06:34:18 -- common/autotest_common.sh@65 -- # : 1 00:08:13.981 06:34:18 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:13.981 06:34:18 -- common/autotest_common.sh@67 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:08:13.981 06:34:18 -- common/autotest_common.sh@69 -- # : 00:08:13.981 06:34:18 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:08:13.981 06:34:18 -- common/autotest_common.sh@71 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:08:13.981 06:34:18 -- common/autotest_common.sh@73 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:08:13.981 06:34:18 -- common/autotest_common.sh@75 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:08:13.981 06:34:18 -- common/autotest_common.sh@77 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:13.981 06:34:18 -- common/autotest_common.sh@79 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:08:13.981 06:34:18 -- common/autotest_common.sh@81 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:08:13.981 06:34:18 -- common/autotest_common.sh@83 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:08:13.981 06:34:18 -- common/autotest_common.sh@85 -- # : 1 00:08:13.981 06:34:18 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:08:13.981 06:34:18 -- common/autotest_common.sh@87 -- # : 0 00:08:13.981 06:34:18 -- 
common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:08:13.981 06:34:18 -- common/autotest_common.sh@89 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:08:13.981 06:34:18 -- common/autotest_common.sh@91 -- # : 1 00:08:13.981 06:34:18 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:08:13.981 06:34:18 -- common/autotest_common.sh@93 -- # : 1 00:08:13.981 06:34:18 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:08:13.981 06:34:18 -- common/autotest_common.sh@95 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:13.981 06:34:18 -- common/autotest_common.sh@97 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:08:13.981 06:34:18 -- common/autotest_common.sh@99 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:08:13.981 06:34:18 -- common/autotest_common.sh@101 -- # : tcp 00:08:13.981 06:34:18 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:13.981 06:34:18 -- common/autotest_common.sh@103 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:08:13.981 06:34:18 -- common/autotest_common.sh@105 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:08:13.981 06:34:18 -- common/autotest_common.sh@107 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:08:13.981 06:34:18 -- common/autotest_common.sh@109 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:08:13.981 06:34:18 -- common/autotest_common.sh@111 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:08:13.981 06:34:18 -- common/autotest_common.sh@113 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:08:13.981 06:34:18 -- common/autotest_common.sh@115 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:08:13.981 06:34:18 -- common/autotest_common.sh@117 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:13.981 06:34:18 -- common/autotest_common.sh@119 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:08:13.981 06:34:18 -- common/autotest_common.sh@121 -- # : 1 00:08:13.981 06:34:18 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:08:13.981 06:34:18 -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:13.981 06:34:18 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:13.981 06:34:18 -- common/autotest_common.sh@125 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:08:13.981 06:34:18 -- common/autotest_common.sh@127 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:08:13.981 06:34:18 -- common/autotest_common.sh@129 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:08:13.981 06:34:18 -- common/autotest_common.sh@131 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:08:13.981 06:34:18 -- common/autotest_common.sh@133 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:08:13.981 06:34:18 
-- common/autotest_common.sh@135 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:08:13.981 06:34:18 -- common/autotest_common.sh@137 -- # : v23.11 00:08:13.981 06:34:18 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:08:13.981 06:34:18 -- common/autotest_common.sh@139 -- # : true 00:08:13.981 06:34:18 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:08:13.981 06:34:18 -- common/autotest_common.sh@141 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:08:13.981 06:34:18 -- common/autotest_common.sh@143 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:08:13.981 06:34:18 -- common/autotest_common.sh@145 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:08:13.981 06:34:18 -- common/autotest_common.sh@147 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:08:13.981 06:34:18 -- common/autotest_common.sh@149 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:08:13.981 06:34:18 -- common/autotest_common.sh@151 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:08:13.981 06:34:18 -- common/autotest_common.sh@153 -- # : e810 00:08:13.981 06:34:18 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:08:13.981 06:34:18 -- common/autotest_common.sh@155 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:08:13.981 06:34:18 -- common/autotest_common.sh@157 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:08:13.981 06:34:18 -- common/autotest_common.sh@159 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:08:13.981 06:34:18 -- common/autotest_common.sh@161 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:08:13.981 06:34:18 -- common/autotest_common.sh@163 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:08:13.981 06:34:18 -- common/autotest_common.sh@166 -- # : 00:08:13.981 06:34:18 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:08:13.981 06:34:18 -- common/autotest_common.sh@168 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:08:13.981 06:34:18 -- common/autotest_common.sh@170 -- # : 0 00:08:13.981 06:34:18 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:13.981 06:34:18 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:13.981 06:34:18 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:13.981 06:34:18 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:13.981 06:34:18 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:13.981 06:34:18 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:13.981 06:34:18 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 
00:08:13.981 06:34:18 -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:13.981 06:34:18 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:13.982 06:34:18 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:13.982 06:34:18 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:13.982 06:34:18 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:13.982 06:34:18 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:13.982 06:34:18 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:13.982 06:34:18 -- common/autotest_common.sh@188 -- # 
PYTHONDONTWRITEBYTECODE=1 00:08:13.982 06:34:18 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:13.982 06:34:18 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:13.982 06:34:18 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:13.982 06:34:18 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:13.982 06:34:18 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:13.982 06:34:18 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:08:13.982 06:34:18 -- common/autotest_common.sh@199 -- # cat 00:08:13.982 06:34:18 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:08:13.982 06:34:18 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:13.982 06:34:18 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:13.982 06:34:18 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:13.982 06:34:18 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:13.982 06:34:18 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:08:13.982 06:34:18 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:08:13.982 06:34:18 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:13.982 06:34:18 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:13.982 06:34:18 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:13.982 06:34:18 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:13.982 06:34:18 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:13.982 06:34:18 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:13.982 06:34:18 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:13.982 06:34:18 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:13.982 06:34:18 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:13.982 06:34:18 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:13.982 06:34:18 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:13.982 06:34:18 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:13.982 06:34:18 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:08:13.982 06:34:18 -- common/autotest_common.sh@252 -- # export valgrind= 00:08:13.982 06:34:18 -- common/autotest_common.sh@252 -- # valgrind= 00:08:13.982 06:34:18 -- common/autotest_common.sh@258 -- # uname -s 00:08:13.982 06:34:18 -- common/autotest_common.sh@258 -- # '[' 
Linux = Linux ']' 00:08:13.982 06:34:18 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:08:13.982 06:34:18 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:08:13.982 06:34:18 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:08:13.982 06:34:18 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:08:13.982 06:34:18 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:08:13.982 06:34:18 -- common/autotest_common.sh@268 -- # MAKE=make 00:08:13.982 06:34:18 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j48 00:08:13.982 06:34:18 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:08:13.982 06:34:18 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:08:13.982 06:34:18 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:08:13.982 06:34:18 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:08:13.982 06:34:18 -- common/autotest_common.sh@289 -- # for i in "$@" 00:08:13.982 06:34:18 -- common/autotest_common.sh@290 -- # case "$i" in 00:08:13.982 06:34:18 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:08:13.982 06:34:18 -- common/autotest_common.sh@307 -- # [[ -z 4079754 ]] 00:08:13.982 06:34:18 -- common/autotest_common.sh@307 -- # kill -0 4079754 00:08:13.982 06:34:18 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:08:13.982 06:34:18 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:08:13.982 06:34:18 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:08:13.982 06:34:18 -- common/autotest_common.sh@320 -- # local mount target_dir 00:08:13.982 06:34:18 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:08:13.982 06:34:18 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:08:13.982 06:34:18 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:08:13.982 06:34:18 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:08:13.982 06:34:18 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.2bLpYl 00:08:13.982 06:34:18 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:13.982 06:34:18 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:08:13.982 06:34:18 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:08:13.982 06:34:18 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.2bLpYl/tests/target /tmp/spdk.2bLpYl 00:08:13.982 06:34:18 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:08:13.982 06:34:18 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:13.982 06:34:18 -- common/autotest_common.sh@316 -- # df -T 00:08:13.982 06:34:18 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:08:13.982 06:34:18 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:08:13.982 06:34:18 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:08:13.982 06:34:18 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:08:13.982 06:34:18 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:08:13.982 06:34:18 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:08:13.982 06:34:18 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:13.982 06:34:18 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:08:13.982 06:34:18 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:08:13.982 06:34:18 
-- common/autotest_common.sh@351 -- # avails["$mount"]=996749312 00:08:13.982 06:34:18 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:08:13.982 06:34:18 -- common/autotest_common.sh@352 -- # uses["$mount"]=4287680512 00:08:13.982 06:34:18 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:13.982 06:34:18 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_root 00:08:13.982 06:34:18 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:08:13.982 06:34:18 -- common/autotest_common.sh@351 -- # avails["$mount"]=49951965184 00:08:13.982 06:34:18 -- common/autotest_common.sh@351 -- # sizes["$mount"]=61994737664 00:08:13.982 06:34:18 -- common/autotest_common.sh@352 -- # uses["$mount"]=12042772480 00:08:13.982 06:34:18 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:13.982 06:34:18 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:08:13.982 06:34:18 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:08:13.982 06:34:18 -- common/autotest_common.sh@351 -- # avails["$mount"]=30996090880 00:08:13.982 06:34:18 -- common/autotest_common.sh@351 -- # sizes["$mount"]=30997368832 00:08:13.982 06:34:18 -- common/autotest_common.sh@352 -- # uses["$mount"]=1277952 00:08:13.982 06:34:18 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:13.982 06:34:18 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:08:13.982 06:34:18 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:08:13.982 06:34:18 -- common/autotest_common.sh@351 -- # avails["$mount"]=12390187008 00:08:13.982 06:34:18 -- common/autotest_common.sh@351 -- # sizes["$mount"]=12398948352 00:08:13.982 06:34:18 -- common/autotest_common.sh@352 -- # uses["$mount"]=8761344 00:08:13.982 06:34:18 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:13.982 06:34:18 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:08:13.982 06:34:18 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:08:13.982 06:34:18 -- common/autotest_common.sh@351 -- # avails["$mount"]=30996922368 00:08:13.982 06:34:18 -- common/autotest_common.sh@351 -- # sizes["$mount"]=30997368832 00:08:13.982 06:34:18 -- common/autotest_common.sh@352 -- # uses["$mount"]=446464 00:08:13.982 06:34:18 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:13.982 06:34:18 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:08:13.982 06:34:18 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:08:13.982 06:34:18 -- common/autotest_common.sh@351 -- # avails["$mount"]=6199468032 00:08:13.982 06:34:18 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6199472128 00:08:13.982 06:34:18 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:08:13.982 06:34:18 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:13.982 06:34:18 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:08:13.982 * Looking for test storage... 
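The set_test_storage pass that follows is plain shell arithmetic. A minimal sketch of it, using the overlay-root numbers from the df output above; the 64 MiB bump on top of the 2 GiB request is read off the trace (2147483648 becoming 2214592512), not taken from SPDK documentation:

  # Paraphrase of the space check traced below (values copied from the df output above)
  requested_size=$((2147483648 + 67108864))        # 2 GiB of test storage plus a 64 MiB margin = 2214592512
  size=61994737664                                 # total bytes on the spdk_root overlay mount
  avail=49951965184                                # bytes currently available on it
  target_space=$avail                              # the candidate test dir lives on this mount
  new_size=$(( (size - avail) + requested_size ))  # 12042772480 + 2214592512 = 14257364992
  if (( new_size * 100 / size > 95 )); then
    echo 'mount too full, try the next storage candidate'
  else
    echo "ok: ~$(( new_size * 100 / size ))% used after reserving test storage"   # ~22% here
  fi

Since roughly 22% is well under the 95% cut-off, the overlay root is accepted and SPDK_TEST_STORAGE is pointed at test/nvmf/target, as the trace goes on to show.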
00:08:13.982 06:34:18 -- common/autotest_common.sh@357 -- # local target_space new_size 00:08:13.982 06:34:18 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:08:13.982 06:34:18 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:13.982 06:34:18 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:13.982 06:34:18 -- common/autotest_common.sh@361 -- # mount=/ 00:08:13.982 06:34:18 -- common/autotest_common.sh@363 -- # target_space=49951965184 00:08:13.982 06:34:18 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:08:13.982 06:34:18 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:08:13.982 06:34:18 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:08:13.983 06:34:18 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:08:13.983 06:34:18 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:08:13.983 06:34:18 -- common/autotest_common.sh@370 -- # new_size=14257364992 00:08:13.983 06:34:18 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:13.983 06:34:18 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:13.983 06:34:18 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:13.983 06:34:18 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:13.983 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:13.983 06:34:18 -- common/autotest_common.sh@378 -- # return 0 00:08:13.983 06:34:18 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:08:13.983 06:34:18 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:08:13.983 06:34:18 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:13.983 06:34:18 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:13.983 06:34:18 -- common/autotest_common.sh@1673 -- # true 00:08:13.983 06:34:18 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:08:13.983 06:34:18 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:13.983 06:34:18 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:13.983 06:34:18 -- common/autotest_common.sh@27 -- # exec 00:08:13.983 06:34:18 -- common/autotest_common.sh@29 -- # exec 00:08:13.983 06:34:18 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:13.983 06:34:18 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:13.983 06:34:18 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:13.983 06:34:18 -- common/autotest_common.sh@18 -- # set -x 00:08:13.983 06:34:18 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:13.983 06:34:18 -- nvmf/common.sh@7 -- # uname -s 00:08:13.983 06:34:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.983 06:34:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.983 06:34:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.983 06:34:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.983 06:34:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.983 06:34:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.983 06:34:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.983 06:34:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.983 06:34:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.983 06:34:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.983 06:34:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:13.983 06:34:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:13.983 06:34:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.983 06:34:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.983 06:34:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:13.983 06:34:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:13.983 06:34:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:13.983 06:34:18 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.983 06:34:18 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.983 06:34:18 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.983 06:34:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.983 06:34:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.983 06:34:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.983 06:34:18 -- paths/export.sh@5 -- # export PATH 00:08:13.983 06:34:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.983 06:34:18 -- nvmf/common.sh@47 -- # : 0 00:08:13.983 06:34:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:13.983 06:34:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:13.983 06:34:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:13.983 06:34:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.983 06:34:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.983 06:34:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:13.983 06:34:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:13.983 06:34:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:13.983 06:34:18 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:13.983 06:34:18 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:13.983 06:34:18 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:13.983 06:34:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:13.983 06:34:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:13.983 06:34:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:13.983 06:34:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:13.983 06:34:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:13.983 06:34:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.983 06:34:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:13.983 06:34:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.983 06:34:18 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:13.983 06:34:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:13.983 06:34:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:13.983 06:34:18 -- common/autotest_common.sh@10 -- # set +x 00:08:16.514 06:34:20 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:16.514 06:34:20 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:16.514 06:34:20 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:16.514 06:34:20 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:16.514 06:34:20 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:16.514 06:34:20 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:16.514 06:34:20 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:16.514 06:34:20 -- 
nvmf/common.sh@295 -- # net_devs=() 00:08:16.514 06:34:20 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:16.514 06:34:20 -- nvmf/common.sh@296 -- # e810=() 00:08:16.514 06:34:20 -- nvmf/common.sh@296 -- # local -ga e810 00:08:16.514 06:34:20 -- nvmf/common.sh@297 -- # x722=() 00:08:16.514 06:34:20 -- nvmf/common.sh@297 -- # local -ga x722 00:08:16.514 06:34:20 -- nvmf/common.sh@298 -- # mlx=() 00:08:16.514 06:34:20 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:16.514 06:34:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:16.514 06:34:20 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:16.514 06:34:20 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:16.514 06:34:20 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:16.514 06:34:20 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:16.514 06:34:20 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:16.514 06:34:20 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:16.514 06:34:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:16.514 06:34:20 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:16.514 06:34:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:16.514 06:34:20 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:16.514 06:34:20 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:16.514 06:34:20 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:16.514 06:34:20 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:16.514 06:34:20 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:16.514 06:34:20 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:16.514 06:34:20 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:16.514 06:34:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:16.514 06:34:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:16.514 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:16.514 06:34:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:16.514 06:34:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:16.514 06:34:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.514 06:34:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.514 06:34:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:16.514 06:34:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:16.514 06:34:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:16.514 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:16.514 06:34:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:16.514 06:34:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:16.514 06:34:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.514 06:34:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.514 06:34:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:16.514 06:34:20 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:16.514 06:34:20 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:16.514 06:34:20 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:16.514 06:34:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:16.514 06:34:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.514 06:34:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:16.514 06:34:20 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.514 06:34:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:16.514 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:16.514 06:34:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.514 06:34:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:16.514 06:34:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.514 06:34:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:16.514 06:34:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.514 06:34:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:16.514 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:16.514 06:34:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.514 06:34:20 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:16.514 06:34:20 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:16.514 06:34:20 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:16.514 06:34:20 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:16.514 06:34:20 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:16.514 06:34:20 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:16.514 06:34:20 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:16.514 06:34:20 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:16.514 06:34:20 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:16.515 06:34:20 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:16.515 06:34:20 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:16.515 06:34:20 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:16.515 06:34:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:16.515 06:34:20 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:16.515 06:34:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:16.515 06:34:20 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:16.515 06:34:20 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:16.515 06:34:20 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:16.515 06:34:20 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:16.515 06:34:20 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:16.515 06:34:20 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:16.515 06:34:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:16.515 06:34:20 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:16.515 06:34:20 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:16.515 06:34:20 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:16.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:16.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:08:16.515 00:08:16.515 --- 10.0.0.2 ping statistics --- 00:08:16.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.515 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:08:16.515 06:34:20 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:16.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:16.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:08:16.515 00:08:16.515 --- 10.0.0.1 ping statistics --- 00:08:16.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.515 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:08:16.515 06:34:20 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:16.515 06:34:20 -- nvmf/common.sh@411 -- # return 0 00:08:16.515 06:34:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:16.515 06:34:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:16.515 06:34:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:16.515 06:34:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:16.515 06:34:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:16.515 06:34:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:16.515 06:34:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:16.515 06:34:20 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:16.515 06:34:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:16.515 06:34:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:16.515 06:34:20 -- common/autotest_common.sh@10 -- # set +x 00:08:16.515 ************************************ 00:08:16.515 START TEST nvmf_filesystem_no_in_capsule 00:08:16.515 ************************************ 00:08:16.515 06:34:20 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:08:16.515 06:34:20 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:16.515 06:34:20 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:16.515 06:34:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:16.515 06:34:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:16.515 06:34:20 -- common/autotest_common.sh@10 -- # set +x 00:08:16.515 06:34:20 -- nvmf/common.sh@470 -- # nvmfpid=4081386 00:08:16.515 06:34:20 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:16.515 06:34:20 -- nvmf/common.sh@471 -- # waitforlisten 4081386 00:08:16.515 06:34:20 -- common/autotest_common.sh@817 -- # '[' -z 4081386 ']' 00:08:16.515 06:34:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.515 06:34:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:16.515 06:34:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.515 06:34:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:16.515 06:34:20 -- common/autotest_common.sh@10 -- # set +x 00:08:16.515 [2024-04-17 06:34:20.849946] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:08:16.515 [2024-04-17 06:34:20.850031] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.515 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.515 [2024-04-17 06:34:20.920832] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:16.515 [2024-04-17 06:34:21.014347] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
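For orientation, the nvmf_tcp_init sequence traced just above wires the two E810 ports into a two-namespace loopback topology: cvl_0_0 (target side, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace where nvmf_tgt runs, while cvl_0_1 (initiator side, 10.0.0.1) stays in the default namespace. A condensed sketch of that wiring, using only the interface names, addresses and port printed in the log:

  ip netns add cvl_0_0_ns_spdk                       # private namespace for the NVMe/TCP target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-facing port moves into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port 4420
  ping -c 1 10.0.0.2                                 # host-to-target reachability check, as traced above

Both pings (host to 10.0.0.2 and, from inside the namespace, back to 10.0.0.1) completing with 0% packet loss is what lets nvmf_tcp_init return 0 and the filesystem test proceed.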
00:08:16.515 [2024-04-17 06:34:21.014407] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.515 [2024-04-17 06:34:21.014425] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.515 [2024-04-17 06:34:21.014438] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:16.515 [2024-04-17 06:34:21.014450] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:16.515 [2024-04-17 06:34:21.014552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.515 [2024-04-17 06:34:21.014614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:16.515 [2024-04-17 06:34:21.014735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:16.515 [2024-04-17 06:34:21.014737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.820 06:34:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:16.820 06:34:21 -- common/autotest_common.sh@850 -- # return 0 00:08:16.820 06:34:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:16.820 06:34:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:16.820 06:34:21 -- common/autotest_common.sh@10 -- # set +x 00:08:16.820 06:34:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.820 06:34:21 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:16.820 06:34:21 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:16.820 06:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.820 06:34:21 -- common/autotest_common.sh@10 -- # set +x 00:08:16.820 [2024-04-17 06:34:21.168989] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.820 06:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.820 06:34:21 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:16.820 06:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.820 06:34:21 -- common/autotest_common.sh@10 -- # set +x 00:08:16.820 Malloc1 00:08:16.820 06:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.820 06:34:21 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:16.820 06:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.820 06:34:21 -- common/autotest_common.sh@10 -- # set +x 00:08:16.820 06:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.820 06:34:21 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:16.820 06:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.820 06:34:21 -- common/autotest_common.sh@10 -- # set +x 00:08:16.820 06:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.820 06:34:21 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:16.820 06:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.820 06:34:21 -- common/autotest_common.sh@10 -- # set +x 00:08:16.820 [2024-04-17 06:34:21.341415] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:16.820 06:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.820 06:34:21 -- target/filesystem.sh@58 -- # get_bdev_size 
Malloc1 00:08:16.820 06:34:21 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:08:16.820 06:34:21 -- common/autotest_common.sh@1365 -- # local bdev_info 00:08:16.820 06:34:21 -- common/autotest_common.sh@1366 -- # local bs 00:08:16.820 06:34:21 -- common/autotest_common.sh@1367 -- # local nb 00:08:16.820 06:34:21 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:16.820 06:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.820 06:34:21 -- common/autotest_common.sh@10 -- # set +x 00:08:16.820 06:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.820 06:34:21 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:08:16.820 { 00:08:16.820 "name": "Malloc1", 00:08:16.820 "aliases": [ 00:08:16.820 "8d22b11a-4bbc-4912-9c05-778d5fa72702" 00:08:16.820 ], 00:08:16.820 "product_name": "Malloc disk", 00:08:16.820 "block_size": 512, 00:08:16.820 "num_blocks": 1048576, 00:08:16.820 "uuid": "8d22b11a-4bbc-4912-9c05-778d5fa72702", 00:08:16.820 "assigned_rate_limits": { 00:08:16.820 "rw_ios_per_sec": 0, 00:08:16.820 "rw_mbytes_per_sec": 0, 00:08:16.820 "r_mbytes_per_sec": 0, 00:08:16.820 "w_mbytes_per_sec": 0 00:08:16.820 }, 00:08:16.820 "claimed": true, 00:08:16.820 "claim_type": "exclusive_write", 00:08:16.820 "zoned": false, 00:08:16.820 "supported_io_types": { 00:08:16.820 "read": true, 00:08:16.820 "write": true, 00:08:16.820 "unmap": true, 00:08:16.820 "write_zeroes": true, 00:08:16.820 "flush": true, 00:08:16.820 "reset": true, 00:08:16.820 "compare": false, 00:08:16.820 "compare_and_write": false, 00:08:16.820 "abort": true, 00:08:16.820 "nvme_admin": false, 00:08:16.820 "nvme_io": false 00:08:16.820 }, 00:08:16.820 "memory_domains": [ 00:08:16.820 { 00:08:16.820 "dma_device_id": "system", 00:08:16.820 "dma_device_type": 1 00:08:16.820 }, 00:08:16.820 { 00:08:16.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.820 "dma_device_type": 2 00:08:16.820 } 00:08:16.820 ], 00:08:16.820 "driver_specific": {} 00:08:16.820 } 00:08:16.820 ]' 00:08:16.820 06:34:21 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:08:17.078 06:34:21 -- common/autotest_common.sh@1369 -- # bs=512 00:08:17.078 06:34:21 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:08:17.078 06:34:21 -- common/autotest_common.sh@1370 -- # nb=1048576 00:08:17.078 06:34:21 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:08:17.078 06:34:21 -- common/autotest_common.sh@1374 -- # echo 512 00:08:17.078 06:34:21 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:17.078 06:34:21 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:17.643 06:34:22 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:17.643 06:34:22 -- common/autotest_common.sh@1184 -- # local i=0 00:08:17.643 06:34:22 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:17.643 06:34:22 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:17.643 06:34:22 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:19.540 06:34:24 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:19.540 06:34:24 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:19.540 06:34:24 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:19.540 06:34:24 -- common/autotest_common.sh@1193 -- # nvme_devices=1 
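Condensed, the target-side setup just traced is the usual SPDK export path: a TCP transport, a 512 MiB / 512-byte-block malloc bdev, a subsystem cnode1 carrying that bdev as a namespace, and a listener on the in-namespace address, after which the kernel initiator connects. rpc_cmd in these tests is the harness wrapper around scripts/rpc.py pointed at the DEFAULT_RPC_ADDR (/var/tmp/spdk.sock) exported earlier in the trace; that wrapper detail is assumed from the SPDK test harness rather than shown in this excerpt. A rough standalone equivalent, with the flags exactly as passed above:

  # target side (rpc_cmd ~ scripts/rpc.py -s /var/tmp/spdk.sock; assumption, see note above)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0      # -c 0 matches the no_in_capsule variant
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1             # MALLOC_BDEV_SIZE=512 MiB, MALLOC_BLOCK_SIZE=512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side (default namespace), followed by the partition/ext4 steps the log continues with below
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% && partprobe
  mkfs.ext4 -F /dev/nvme0n1p1 && mount /dev/nvme0n1p1 /mnt/device

The 536870912-byte size reported for nvme0n1 below is the 512 MiB malloc bdev showing through, which is what the (( nvme_size == malloc_size )) check verifies before partitioning.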
00:08:19.540 06:34:24 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:19.540 06:34:24 -- common/autotest_common.sh@1194 -- # return 0 00:08:19.540 06:34:24 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:19.540 06:34:24 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:19.540 06:34:24 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:19.540 06:34:24 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:19.540 06:34:24 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:19.540 06:34:24 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:19.540 06:34:24 -- setup/common.sh@80 -- # echo 536870912 00:08:19.540 06:34:24 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:19.540 06:34:24 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:19.541 06:34:24 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:19.541 06:34:24 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:19.798 06:34:24 -- target/filesystem.sh@69 -- # partprobe 00:08:20.055 06:34:24 -- target/filesystem.sh@70 -- # sleep 1 00:08:21.427 06:34:25 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:21.427 06:34:25 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:21.427 06:34:25 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:21.427 06:34:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:21.427 06:34:25 -- common/autotest_common.sh@10 -- # set +x 00:08:21.427 ************************************ 00:08:21.427 START TEST filesystem_ext4 00:08:21.427 ************************************ 00:08:21.427 06:34:25 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:21.427 06:34:25 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:21.427 06:34:25 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:21.427 06:34:25 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:21.427 06:34:25 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:21.427 06:34:25 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:21.427 06:34:25 -- common/autotest_common.sh@914 -- # local i=0 00:08:21.427 06:34:25 -- common/autotest_common.sh@915 -- # local force 00:08:21.427 06:34:25 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:21.427 06:34:25 -- common/autotest_common.sh@918 -- # force=-F 00:08:21.427 06:34:25 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:21.427 mke2fs 1.46.5 (30-Dec-2021) 00:08:21.427 Discarding device blocks: 0/522240 done 00:08:21.427 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:21.427 Filesystem UUID: 96b08ec2-f381-45cd-8225-3011e6ef6b37 00:08:21.427 Superblock backups stored on blocks: 00:08:21.427 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:21.427 00:08:21.427 Allocating group tables: 0/64 done 00:08:21.427 Writing inode tables: 0/64 done 00:08:21.427 Creating journal (8192 blocks): done 00:08:21.427 Writing superblocks and filesystem accounting information: 0/64 done 00:08:21.427 00:08:21.427 06:34:25 -- common/autotest_common.sh@931 -- # return 0 00:08:21.427 06:34:25 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:22.391 06:34:26 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:22.391 06:34:26 -- target/filesystem.sh@25 -- # sync 00:08:22.391 06:34:26 -- target/filesystem.sh@26 -- # rm 
/mnt/device/aaa 00:08:22.391 06:34:26 -- target/filesystem.sh@27 -- # sync 00:08:22.391 06:34:26 -- target/filesystem.sh@29 -- # i=0 00:08:22.391 06:34:26 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:22.391 06:34:26 -- target/filesystem.sh@37 -- # kill -0 4081386 00:08:22.391 06:34:26 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:22.391 06:34:26 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:22.391 06:34:26 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:22.391 06:34:26 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:22.391 00:08:22.391 real 0m1.054s 00:08:22.391 user 0m0.021s 00:08:22.391 sys 0m0.028s 00:08:22.391 06:34:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:22.391 06:34:26 -- common/autotest_common.sh@10 -- # set +x 00:08:22.391 ************************************ 00:08:22.391 END TEST filesystem_ext4 00:08:22.391 ************************************ 00:08:22.391 06:34:26 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:22.391 06:34:26 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:22.391 06:34:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:22.391 06:34:26 -- common/autotest_common.sh@10 -- # set +x 00:08:22.391 ************************************ 00:08:22.391 START TEST filesystem_btrfs 00:08:22.391 ************************************ 00:08:22.391 06:34:26 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:22.391 06:34:26 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:22.391 06:34:26 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:22.391 06:34:26 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:22.391 06:34:26 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:22.391 06:34:26 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:22.391 06:34:26 -- common/autotest_common.sh@914 -- # local i=0 00:08:22.391 06:34:26 -- common/autotest_common.sh@915 -- # local force 00:08:22.391 06:34:26 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:22.391 06:34:26 -- common/autotest_common.sh@920 -- # force=-f 00:08:22.391 06:34:26 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:22.649 btrfs-progs v6.6.2 00:08:22.649 See https://btrfs.readthedocs.io for more information. 00:08:22.649 00:08:22.649 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:22.649 NOTE: several default settings have changed in version 5.15, please make sure 00:08:22.649 this does not affect your deployments: 00:08:22.649 - DUP for metadata (-m dup) 00:08:22.649 - enabled no-holes (-O no-holes) 00:08:22.649 - enabled free-space-tree (-R free-space-tree) 00:08:22.649 00:08:22.649 Label: (null) 00:08:22.649 UUID: ef9c09ea-7699-46bd-ae19-51cee623cb8c 00:08:22.649 Node size: 16384 00:08:22.649 Sector size: 4096 00:08:22.649 Filesystem size: 510.00MiB 00:08:22.649 Block group profiles: 00:08:22.649 Data: single 8.00MiB 00:08:22.649 Metadata: DUP 32.00MiB 00:08:22.649 System: DUP 8.00MiB 00:08:22.649 SSD detected: yes 00:08:22.649 Zoned device: no 00:08:22.649 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:22.649 Runtime features: free-space-tree 00:08:22.649 Checksum: crc32c 00:08:22.649 Number of devices: 1 00:08:22.649 Devices: 00:08:22.649 ID SIZE PATH 00:08:22.649 1 510.00MiB /dev/nvme0n1p1 00:08:22.649 00:08:22.649 06:34:27 -- common/autotest_common.sh@931 -- # return 0 00:08:22.649 06:34:27 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:23.582 06:34:27 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:23.582 06:34:27 -- target/filesystem.sh@25 -- # sync 00:08:23.582 06:34:27 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:23.582 06:34:27 -- target/filesystem.sh@27 -- # sync 00:08:23.582 06:34:27 -- target/filesystem.sh@29 -- # i=0 00:08:23.582 06:34:27 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:23.582 06:34:27 -- target/filesystem.sh@37 -- # kill -0 4081386 00:08:23.582 06:34:27 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:23.582 06:34:27 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:23.582 06:34:27 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:23.582 06:34:27 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:23.582 00:08:23.582 real 0m0.982s 00:08:23.582 user 0m0.018s 00:08:23.582 sys 0m0.039s 00:08:23.582 06:34:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:23.582 06:34:27 -- common/autotest_common.sh@10 -- # set +x 00:08:23.582 ************************************ 00:08:23.582 END TEST filesystem_btrfs 00:08:23.582 ************************************ 00:08:23.582 06:34:27 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:23.582 06:34:27 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:23.582 06:34:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:23.582 06:34:27 -- common/autotest_common.sh@10 -- # set +x 00:08:23.582 ************************************ 00:08:23.582 START TEST filesystem_xfs 00:08:23.582 ************************************ 00:08:23.582 06:34:28 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:08:23.582 06:34:28 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:23.582 06:34:28 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:23.582 06:34:28 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:23.582 06:34:28 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:23.582 06:34:28 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:23.582 06:34:28 -- common/autotest_common.sh@914 -- # local i=0 00:08:23.582 06:34:28 -- common/autotest_common.sh@915 -- # local force 00:08:23.582 06:34:28 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:23.582 06:34:28 -- common/autotest_common.sh@920 -- # force=-f 00:08:23.582 06:34:28 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:23.582 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:23.582 = sectsz=512 attr=2, projid32bit=1 00:08:23.582 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:23.582 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:23.582 data = bsize=4096 blocks=130560, imaxpct=25 00:08:23.582 = sunit=0 swidth=0 blks 00:08:23.582 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:23.582 log =internal log bsize=4096 blocks=16384, version=2 00:08:23.582 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:23.582 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:24.515 Discarding blocks...Done. 00:08:24.515 06:34:29 -- common/autotest_common.sh@931 -- # return 0 00:08:24.515 06:34:29 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:27.043 06:34:31 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:27.043 06:34:31 -- target/filesystem.sh@25 -- # sync 00:08:27.043 06:34:31 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:27.043 06:34:31 -- target/filesystem.sh@27 -- # sync 00:08:27.043 06:34:31 -- target/filesystem.sh@29 -- # i=0 00:08:27.043 06:34:31 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:27.043 06:34:31 -- target/filesystem.sh@37 -- # kill -0 4081386 00:08:27.043 06:34:31 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:27.043 06:34:31 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:27.043 06:34:31 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:27.043 06:34:31 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:27.043 00:08:27.043 real 0m3.381s 00:08:27.043 user 0m0.014s 00:08:27.043 sys 0m0.042s 00:08:27.043 06:34:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:27.043 06:34:31 -- common/autotest_common.sh@10 -- # set +x 00:08:27.043 ************************************ 00:08:27.043 END TEST filesystem_xfs 00:08:27.043 ************************************ 00:08:27.043 06:34:31 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:27.043 06:34:31 -- target/filesystem.sh@93 -- # sync 00:08:27.043 06:34:31 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:27.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:27.043 06:34:31 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:27.043 06:34:31 -- common/autotest_common.sh@1205 -- # local i=0 00:08:27.043 06:34:31 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:27.043 06:34:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:27.043 06:34:31 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:27.043 06:34:31 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:27.043 06:34:31 -- common/autotest_common.sh@1217 -- # return 0 00:08:27.043 06:34:31 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:27.043 06:34:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:27.043 06:34:31 -- common/autotest_common.sh@10 -- # set +x 00:08:27.043 06:34:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:27.043 06:34:31 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:27.043 06:34:31 -- target/filesystem.sh@101 -- # killprocess 4081386 00:08:27.043 06:34:31 -- common/autotest_common.sh@936 -- # '[' -z 4081386 ']' 00:08:27.043 06:34:31 -- common/autotest_common.sh@940 -- # kill -0 4081386 00:08:27.043 06:34:31 -- 
common/autotest_common.sh@941 -- # uname 00:08:27.043 06:34:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:27.043 06:34:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4081386 00:08:27.043 06:34:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:27.043 06:34:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:27.043 06:34:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4081386' 00:08:27.043 killing process with pid 4081386 00:08:27.043 06:34:31 -- common/autotest_common.sh@955 -- # kill 4081386 00:08:27.043 06:34:31 -- common/autotest_common.sh@960 -- # wait 4081386 00:08:27.611 06:34:32 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:27.611 00:08:27.611 real 0m11.228s 00:08:27.611 user 0m43.082s 00:08:27.611 sys 0m1.717s 00:08:27.611 06:34:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:27.611 06:34:32 -- common/autotest_common.sh@10 -- # set +x 00:08:27.611 ************************************ 00:08:27.611 END TEST nvmf_filesystem_no_in_capsule 00:08:27.611 ************************************ 00:08:27.611 06:34:32 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:27.611 06:34:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:27.611 06:34:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:27.611 06:34:32 -- common/autotest_common.sh@10 -- # set +x 00:08:27.611 ************************************ 00:08:27.611 START TEST nvmf_filesystem_in_capsule 00:08:27.611 ************************************ 00:08:27.611 06:34:32 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:08:27.611 06:34:32 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:27.611 06:34:32 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:27.611 06:34:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:27.611 06:34:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:27.611 06:34:32 -- common/autotest_common.sh@10 -- # set +x 00:08:27.611 06:34:32 -- nvmf/common.sh@470 -- # nvmfpid=4082967 00:08:27.611 06:34:32 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:27.611 06:34:32 -- nvmf/common.sh@471 -- # waitforlisten 4082967 00:08:27.611 06:34:32 -- common/autotest_common.sh@817 -- # '[' -z 4082967 ']' 00:08:27.611 06:34:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.611 06:34:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:27.611 06:34:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.611 06:34:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:27.611 06:34:32 -- common/autotest_common.sh@10 -- # set +x 00:08:27.611 [2024-04-17 06:34:32.207833] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
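The second pass (nvmf_filesystem_in_capsule) repeats the same flow; the only functional difference is that the transport is created with -c 4096, allowing up to 4 KiB of data to travel inside the command capsule instead of being fetched separately. The launch-and-wait that nvmfappstart/waitforlisten perform can be approximated as below, with paths abbreviated; the rpc_get_methods probe is only an illustrative way to poll the RPC socket, not necessarily what the helper literally calls.

  # start the target inside the namespace that owns the NIC, then wait for its RPC socket
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096       # in-capsule data enabled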
00:08:27.611 [2024-04-17 06:34:32.207918] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.870 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.870 [2024-04-17 06:34:32.278359] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:27.870 [2024-04-17 06:34:32.372944] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.870 [2024-04-17 06:34:32.373004] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.870 [2024-04-17 06:34:32.373020] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:27.870 [2024-04-17 06:34:32.373033] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:27.870 [2024-04-17 06:34:32.373045] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:27.870 [2024-04-17 06:34:32.373127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.870 [2024-04-17 06:34:32.373148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:27.870 [2024-04-17 06:34:32.373262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:27.870 [2024-04-17 06:34:32.373266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.129 06:34:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:28.129 06:34:32 -- common/autotest_common.sh@850 -- # return 0 00:08:28.129 06:34:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:28.129 06:34:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:28.129 06:34:32 -- common/autotest_common.sh@10 -- # set +x 00:08:28.129 06:34:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.129 06:34:32 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:28.129 06:34:32 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:28.129 06:34:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:28.129 06:34:32 -- common/autotest_common.sh@10 -- # set +x 00:08:28.129 [2024-04-17 06:34:32.527062] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.129 06:34:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:28.129 06:34:32 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:28.129 06:34:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:28.129 06:34:32 -- common/autotest_common.sh@10 -- # set +x 00:08:28.129 Malloc1 00:08:28.129 06:34:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:28.129 06:34:32 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:28.129 06:34:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:28.129 06:34:32 -- common/autotest_common.sh@10 -- # set +x 00:08:28.129 06:34:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:28.129 06:34:32 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:28.129 06:34:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:28.129 06:34:32 -- common/autotest_common.sh@10 -- # set +x 00:08:28.129 06:34:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:28.129 06:34:32 
-- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:28.129 06:34:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:28.129 06:34:32 -- common/autotest_common.sh@10 -- # set +x 00:08:28.129 [2024-04-17 06:34:32.717492] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:28.129 06:34:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:28.129 06:34:32 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:28.129 06:34:32 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:08:28.129 06:34:32 -- common/autotest_common.sh@1365 -- # local bdev_info 00:08:28.129 06:34:32 -- common/autotest_common.sh@1366 -- # local bs 00:08:28.129 06:34:32 -- common/autotest_common.sh@1367 -- # local nb 00:08:28.129 06:34:32 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:28.129 06:34:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:28.129 06:34:32 -- common/autotest_common.sh@10 -- # set +x 00:08:28.387 06:34:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:28.387 06:34:32 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:08:28.387 { 00:08:28.387 "name": "Malloc1", 00:08:28.387 "aliases": [ 00:08:28.387 "86caba41-20ac-498b-bc06-2daafabfb2d3" 00:08:28.387 ], 00:08:28.387 "product_name": "Malloc disk", 00:08:28.387 "block_size": 512, 00:08:28.387 "num_blocks": 1048576, 00:08:28.387 "uuid": "86caba41-20ac-498b-bc06-2daafabfb2d3", 00:08:28.387 "assigned_rate_limits": { 00:08:28.387 "rw_ios_per_sec": 0, 00:08:28.387 "rw_mbytes_per_sec": 0, 00:08:28.387 "r_mbytes_per_sec": 0, 00:08:28.387 "w_mbytes_per_sec": 0 00:08:28.387 }, 00:08:28.387 "claimed": true, 00:08:28.387 "claim_type": "exclusive_write", 00:08:28.387 "zoned": false, 00:08:28.387 "supported_io_types": { 00:08:28.387 "read": true, 00:08:28.387 "write": true, 00:08:28.387 "unmap": true, 00:08:28.387 "write_zeroes": true, 00:08:28.387 "flush": true, 00:08:28.387 "reset": true, 00:08:28.387 "compare": false, 00:08:28.387 "compare_and_write": false, 00:08:28.387 "abort": true, 00:08:28.387 "nvme_admin": false, 00:08:28.387 "nvme_io": false 00:08:28.387 }, 00:08:28.387 "memory_domains": [ 00:08:28.387 { 00:08:28.387 "dma_device_id": "system", 00:08:28.387 "dma_device_type": 1 00:08:28.387 }, 00:08:28.387 { 00:08:28.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.387 "dma_device_type": 2 00:08:28.387 } 00:08:28.387 ], 00:08:28.387 "driver_specific": {} 00:08:28.387 } 00:08:28.387 ]' 00:08:28.387 06:34:32 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:08:28.387 06:34:32 -- common/autotest_common.sh@1369 -- # bs=512 00:08:28.387 06:34:32 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:08:28.387 06:34:32 -- common/autotest_common.sh@1370 -- # nb=1048576 00:08:28.387 06:34:32 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:08:28.387 06:34:32 -- common/autotest_common.sh@1374 -- # echo 512 00:08:28.387 06:34:32 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:28.387 06:34:32 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:28.953 06:34:33 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:28.953 06:34:33 -- common/autotest_common.sh@1184 -- # local i=0 00:08:28.953 06:34:33 -- 
common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:28.953 06:34:33 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:28.953 06:34:33 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:30.916 06:34:35 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:30.916 06:34:35 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:30.916 06:34:35 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:30.916 06:34:35 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:30.916 06:34:35 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:30.916 06:34:35 -- common/autotest_common.sh@1194 -- # return 0 00:08:30.916 06:34:35 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:30.916 06:34:35 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:30.916 06:34:35 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:30.916 06:34:35 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:30.916 06:34:35 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:30.916 06:34:35 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:30.916 06:34:35 -- setup/common.sh@80 -- # echo 536870912 00:08:30.916 06:34:35 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:30.916 06:34:35 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:30.916 06:34:35 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:30.916 06:34:35 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:31.482 06:34:35 -- target/filesystem.sh@69 -- # partprobe 00:08:31.740 06:34:36 -- target/filesystem.sh@70 -- # sleep 1 00:08:32.673 06:34:37 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:32.673 06:34:37 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:32.673 06:34:37 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:32.673 06:34:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.673 06:34:37 -- common/autotest_common.sh@10 -- # set +x 00:08:32.931 ************************************ 00:08:32.931 START TEST filesystem_in_capsule_ext4 00:08:32.931 ************************************ 00:08:32.931 06:34:37 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:32.931 06:34:37 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:32.931 06:34:37 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:32.931 06:34:37 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:32.931 06:34:37 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:32.931 06:34:37 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:32.931 06:34:37 -- common/autotest_common.sh@914 -- # local i=0 00:08:32.931 06:34:37 -- common/autotest_common.sh@915 -- # local force 00:08:32.931 06:34:37 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:32.931 06:34:37 -- common/autotest_common.sh@918 -- # force=-F 00:08:32.931 06:34:37 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:32.931 mke2fs 1.46.5 (30-Dec-2021) 00:08:32.931 Discarding device blocks: 0/522240 done 00:08:32.931 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:32.931 Filesystem UUID: 795c0eba-1eb6-4823-b708-03570069a49e 00:08:32.931 Superblock backups stored on blocks: 00:08:32.931 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:32.931 00:08:32.931 
Allocating group tables: 0/64 done 00:08:32.931 Writing inode tables: 0/64 done 00:08:34.829 Creating journal (8192 blocks): done 00:08:34.829 Writing superblocks and filesystem accounting information: 0/64 done 00:08:34.829 00:08:34.829 06:34:39 -- common/autotest_common.sh@931 -- # return 0 00:08:34.829 06:34:39 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:35.763 06:34:40 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:35.763 06:34:40 -- target/filesystem.sh@25 -- # sync 00:08:35.763 06:34:40 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:35.763 06:34:40 -- target/filesystem.sh@27 -- # sync 00:08:35.763 06:34:40 -- target/filesystem.sh@29 -- # i=0 00:08:35.763 06:34:40 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:35.763 06:34:40 -- target/filesystem.sh@37 -- # kill -0 4082967 00:08:35.763 06:34:40 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:35.763 06:34:40 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:35.763 06:34:40 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:35.763 06:34:40 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:35.763 00:08:35.763 real 0m2.960s 00:08:35.763 user 0m0.017s 00:08:35.763 sys 0m0.033s 00:08:35.763 06:34:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:35.763 06:34:40 -- common/autotest_common.sh@10 -- # set +x 00:08:35.763 ************************************ 00:08:35.763 END TEST filesystem_in_capsule_ext4 00:08:35.763 ************************************ 00:08:35.763 06:34:40 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:35.763 06:34:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:35.763 06:34:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:35.763 06:34:40 -- common/autotest_common.sh@10 -- # set +x 00:08:36.021 ************************************ 00:08:36.021 START TEST filesystem_in_capsule_btrfs 00:08:36.021 ************************************ 00:08:36.021 06:34:40 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:36.021 06:34:40 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:36.021 06:34:40 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:36.021 06:34:40 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:36.021 06:34:40 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:36.021 06:34:40 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:36.021 06:34:40 -- common/autotest_common.sh@914 -- # local i=0 00:08:36.021 06:34:40 -- common/autotest_common.sh@915 -- # local force 00:08:36.021 06:34:40 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:36.021 06:34:40 -- common/autotest_common.sh@920 -- # force=-f 00:08:36.021 06:34:40 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:36.278 btrfs-progs v6.6.2 00:08:36.278 See https://btrfs.readthedocs.io for more information. 00:08:36.278 00:08:36.278 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:36.278 NOTE: several default settings have changed in version 5.15, please make sure 00:08:36.278 this does not affect your deployments: 00:08:36.278 - DUP for metadata (-m dup) 00:08:36.278 - enabled no-holes (-O no-holes) 00:08:36.278 - enabled free-space-tree (-R free-space-tree) 00:08:36.278 00:08:36.278 Label: (null) 00:08:36.278 UUID: 769f44fe-b47b-40b9-96f5-bfa76b2c141f 00:08:36.278 Node size: 16384 00:08:36.278 Sector size: 4096 00:08:36.278 Filesystem size: 510.00MiB 00:08:36.278 Block group profiles: 00:08:36.278 Data: single 8.00MiB 00:08:36.278 Metadata: DUP 32.00MiB 00:08:36.278 System: DUP 8.00MiB 00:08:36.278 SSD detected: yes 00:08:36.279 Zoned device: no 00:08:36.279 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:36.279 Runtime features: free-space-tree 00:08:36.279 Checksum: crc32c 00:08:36.279 Number of devices: 1 00:08:36.279 Devices: 00:08:36.279 ID SIZE PATH 00:08:36.279 1 510.00MiB /dev/nvme0n1p1 00:08:36.279 00:08:36.279 06:34:40 -- common/autotest_common.sh@931 -- # return 0 00:08:36.279 06:34:40 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:37.211 06:34:41 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:37.211 06:34:41 -- target/filesystem.sh@25 -- # sync 00:08:37.211 06:34:41 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:37.211 06:34:41 -- target/filesystem.sh@27 -- # sync 00:08:37.211 06:34:41 -- target/filesystem.sh@29 -- # i=0 00:08:37.211 06:34:41 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:37.211 06:34:41 -- target/filesystem.sh@37 -- # kill -0 4082967 00:08:37.211 06:34:41 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:37.211 06:34:41 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:37.211 06:34:41 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:37.211 06:34:41 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:37.211 00:08:37.211 real 0m1.184s 00:08:37.211 user 0m0.012s 00:08:37.211 sys 0m0.051s 00:08:37.211 06:34:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:37.211 06:34:41 -- common/autotest_common.sh@10 -- # set +x 00:08:37.211 ************************************ 00:08:37.211 END TEST filesystem_in_capsule_btrfs 00:08:37.211 ************************************ 00:08:37.211 06:34:41 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:37.211 06:34:41 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:37.211 06:34:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:37.211 06:34:41 -- common/autotest_common.sh@10 -- # set +x 00:08:37.211 ************************************ 00:08:37.211 START TEST filesystem_in_capsule_xfs 00:08:37.211 ************************************ 00:08:37.211 06:34:41 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:08:37.211 06:34:41 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:37.211 06:34:41 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:37.211 06:34:41 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:37.211 06:34:41 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:37.211 06:34:41 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:37.211 06:34:41 -- common/autotest_common.sh@914 -- # local i=0 00:08:37.211 06:34:41 -- common/autotest_common.sh@915 -- # local force 00:08:37.211 06:34:41 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:37.211 06:34:41 -- common/autotest_common.sh@920 -- # force=-f 
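Each filesystem_* subtest above follows the same pattern once the namespace is connected: carve a single GPT partition, format it with the filesystem under test, push a minimal write/remove cycle through the mount, then confirm that both the partition and the target process survived. Roughly:

  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe && sleep 1

  mkfs.xfs -f /dev/nvme0n1p1                       # or mkfs.ext4 -F / mkfs.btrfs -f, per subtest
  mkdir -p /mnt/device
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync
  rm /mnt/device/aaa && sync
  umount /mnt/device

  kill -0 "$nvmfpid"                               # target still running
  lsblk -l -o NAME | grep -q -w nvme0n1            # controller still present
  lsblk -l -o NAME | grep -q -w nvme0n1p1          # partition still present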
00:08:37.211 06:34:41 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:37.468 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:37.468 = sectsz=512 attr=2, projid32bit=1 00:08:37.468 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:37.468 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:37.468 data = bsize=4096 blocks=130560, imaxpct=25 00:08:37.468 = sunit=0 swidth=0 blks 00:08:37.468 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:37.468 log =internal log bsize=4096 blocks=16384, version=2 00:08:37.468 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:37.468 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:38.032 Discarding blocks...Done. 00:08:38.032 06:34:42 -- common/autotest_common.sh@931 -- # return 0 00:08:38.032 06:34:42 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:40.556 06:34:45 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:40.556 06:34:45 -- target/filesystem.sh@25 -- # sync 00:08:40.556 06:34:45 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:40.556 06:34:45 -- target/filesystem.sh@27 -- # sync 00:08:40.556 06:34:45 -- target/filesystem.sh@29 -- # i=0 00:08:40.556 06:34:45 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:40.556 06:34:45 -- target/filesystem.sh@37 -- # kill -0 4082967 00:08:40.556 06:34:45 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:40.556 06:34:45 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:40.556 06:34:45 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:40.556 06:34:45 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:40.556 00:08:40.556 real 0m3.434s 00:08:40.556 user 0m0.015s 00:08:40.556 sys 0m0.040s 00:08:40.556 06:34:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:40.556 06:34:45 -- common/autotest_common.sh@10 -- # set +x 00:08:40.556 ************************************ 00:08:40.556 END TEST filesystem_in_capsule_xfs 00:08:40.556 ************************************ 00:08:40.813 06:34:45 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:40.813 06:34:45 -- target/filesystem.sh@93 -- # sync 00:08:40.813 06:34:45 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:40.814 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.814 06:34:45 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:40.814 06:34:45 -- common/autotest_common.sh@1205 -- # local i=0 00:08:40.814 06:34:45 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:40.814 06:34:45 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:40.814 06:34:45 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:40.814 06:34:45 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:40.814 06:34:45 -- common/autotest_common.sh@1217 -- # return 0 00:08:40.814 06:34:45 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:40.814 06:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:40.814 06:34:45 -- common/autotest_common.sh@10 -- # set +x 00:08:40.814 06:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:40.814 06:34:45 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:40.814 06:34:45 -- target/filesystem.sh@101 -- # killprocess 4082967 00:08:40.814 06:34:45 -- common/autotest_common.sh@936 -- # '[' -z 4082967 ']' 00:08:40.814 06:34:45 -- common/autotest_common.sh@940 -- # kill -0 4082967 
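Teardown mirrors the setup in reverse, and the same sequence closes both the no-in-capsule and the in-capsule runs: drop the test partition under a flock on the block device, flush, disconnect the kernel initiator, wait for the serial to disappear, then delete the subsystem and stop the target. Condensed:

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid" && wait "$nvmfpid"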
00:08:40.814 06:34:45 -- common/autotest_common.sh@941 -- # uname 00:08:40.814 06:34:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:40.814 06:34:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4082967 00:08:40.814 06:34:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:40.814 06:34:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:40.814 06:34:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4082967' 00:08:40.814 killing process with pid 4082967 00:08:40.814 06:34:45 -- common/autotest_common.sh@955 -- # kill 4082967 00:08:40.814 06:34:45 -- common/autotest_common.sh@960 -- # wait 4082967 00:08:41.379 06:34:45 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:41.379 00:08:41.379 real 0m13.687s 00:08:41.379 user 0m52.563s 00:08:41.379 sys 0m1.992s 00:08:41.379 06:34:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:41.379 06:34:45 -- common/autotest_common.sh@10 -- # set +x 00:08:41.379 ************************************ 00:08:41.379 END TEST nvmf_filesystem_in_capsule 00:08:41.379 ************************************ 00:08:41.379 06:34:45 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:41.379 06:34:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:41.379 06:34:45 -- nvmf/common.sh@117 -- # sync 00:08:41.379 06:34:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:41.379 06:34:45 -- nvmf/common.sh@120 -- # set +e 00:08:41.379 06:34:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:41.379 06:34:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:41.379 rmmod nvme_tcp 00:08:41.379 rmmod nvme_fabrics 00:08:41.379 rmmod nvme_keyring 00:08:41.379 06:34:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:41.379 06:34:45 -- nvmf/common.sh@124 -- # set -e 00:08:41.379 06:34:45 -- nvmf/common.sh@125 -- # return 0 00:08:41.379 06:34:45 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:08:41.379 06:34:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:41.379 06:34:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:41.380 06:34:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:41.380 06:34:45 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:41.380 06:34:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:41.380 06:34:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.380 06:34:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:41.380 06:34:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.916 06:34:47 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:43.916 00:08:43.916 real 0m29.611s 00:08:43.916 user 1m36.599s 00:08:43.916 sys 0m5.427s 00:08:43.916 06:34:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:43.916 06:34:47 -- common/autotest_common.sh@10 -- # set +x 00:08:43.916 ************************************ 00:08:43.916 END TEST nvmf_filesystem 00:08:43.916 ************************************ 00:08:43.916 06:34:47 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:43.916 06:34:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:43.916 06:34:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:43.916 06:34:47 -- common/autotest_common.sh@10 -- # set +x 00:08:43.916 ************************************ 00:08:43.916 START TEST nvmf_discovery 00:08:43.916 ************************************ 00:08:43.916 
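Before the discovery suite starts, nvmftestfini (the trap installed at the beginning of the filesystem suite) has already unwound the host side: the nvme-tcp, nvme-fabrics and nvme-keyring modules are unloaded (the rmmod lines above), the target's network namespace is removed and the leftover address on the initiator port is flushed. The namespace removal happens inside the _remove_spdk_ns helper, so the middle line below is an assumed equivalent rather than a transcript:

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  ip netns delete cvl_0_0_ns_spdk                  # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1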
06:34:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:43.916 * Looking for test storage... 00:08:43.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:43.916 06:34:48 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:43.916 06:34:48 -- nvmf/common.sh@7 -- # uname -s 00:08:43.916 06:34:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.916 06:34:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.916 06:34:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.916 06:34:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.916 06:34:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.916 06:34:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.916 06:34:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.916 06:34:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.916 06:34:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.916 06:34:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.916 06:34:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:43.916 06:34:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:43.916 06:34:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.916 06:34:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.916 06:34:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:43.916 06:34:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.916 06:34:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:43.916 06:34:48 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.916 06:34:48 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.916 06:34:48 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.916 06:34:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.916 06:34:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.916 06:34:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.916 06:34:48 -- paths/export.sh@5 -- # export PATH 00:08:43.916 06:34:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.916 06:34:48 -- nvmf/common.sh@47 -- # : 0 00:08:43.916 06:34:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:43.916 06:34:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:43.916 06:34:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.916 06:34:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.916 06:34:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.916 06:34:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:43.916 06:34:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:43.916 06:34:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:43.916 06:34:48 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:43.916 06:34:48 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:43.916 06:34:48 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:43.916 06:34:48 -- target/discovery.sh@15 -- # hash nvme 00:08:43.916 06:34:48 -- target/discovery.sh@20 -- # nvmftestinit 00:08:43.916 06:34:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:43.916 06:34:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:43.916 06:34:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:43.916 06:34:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:43.916 06:34:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:43.916 06:34:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.916 06:34:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:43.916 06:34:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.916 06:34:48 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:43.916 06:34:48 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:43.916 06:34:48 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:43.916 06:34:48 -- common/autotest_common.sh@10 -- # set +x 00:08:45.819 06:34:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:45.819 06:34:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:45.819 06:34:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:45.819 06:34:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:45.819 06:34:50 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:45.819 06:34:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:45.819 06:34:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:45.819 06:34:50 -- 
nvmf/common.sh@295 -- # net_devs=() 00:08:45.819 06:34:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:45.819 06:34:50 -- nvmf/common.sh@296 -- # e810=() 00:08:45.819 06:34:50 -- nvmf/common.sh@296 -- # local -ga e810 00:08:45.819 06:34:50 -- nvmf/common.sh@297 -- # x722=() 00:08:45.819 06:34:50 -- nvmf/common.sh@297 -- # local -ga x722 00:08:45.819 06:34:50 -- nvmf/common.sh@298 -- # mlx=() 00:08:45.819 06:34:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:45.819 06:34:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:45.819 06:34:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:45.819 06:34:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:45.819 06:34:50 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:45.819 06:34:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:45.819 06:34:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:45.819 06:34:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:45.819 06:34:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:45.819 06:34:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:45.819 06:34:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:45.819 06:34:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:45.819 06:34:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:45.819 06:34:50 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:45.819 06:34:50 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:45.819 06:34:50 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:45.819 06:34:50 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:45.819 06:34:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:45.819 06:34:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:45.819 06:34:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:45.819 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:45.819 06:34:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:45.819 06:34:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:45.819 06:34:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.819 06:34:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.819 06:34:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:45.819 06:34:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:45.819 06:34:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:45.819 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:45.819 06:34:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:45.819 06:34:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:45.819 06:34:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.819 06:34:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.819 06:34:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:45.819 06:34:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:45.819 06:34:50 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:45.819 06:34:50 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:45.819 06:34:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:45.819 06:34:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.819 06:34:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:45.819 06:34:50 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.819 06:34:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:45.819 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:45.819 06:34:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.819 06:34:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:45.819 06:34:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.819 06:34:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:45.819 06:34:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.819 06:34:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:45.819 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:45.819 06:34:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.819 06:34:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:45.819 06:34:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:45.819 06:34:50 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:45.819 06:34:50 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:45.819 06:34:50 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:45.819 06:34:50 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.819 06:34:50 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.819 06:34:50 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:45.819 06:34:50 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:45.819 06:34:50 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:45.819 06:34:50 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:45.819 06:34:50 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:45.819 06:34:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:45.819 06:34:50 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.819 06:34:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:45.819 06:34:50 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:45.819 06:34:50 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:45.819 06:34:50 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:45.819 06:34:50 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:45.819 06:34:50 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:45.819 06:34:50 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:45.819 06:34:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:45.819 06:34:50 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:45.819 06:34:50 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:45.819 06:34:50 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:45.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:45.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:08:45.819 00:08:45.819 --- 10.0.0.2 ping statistics --- 00:08:45.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.819 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:08:45.819 06:34:50 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:45.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:45.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:08:45.819 00:08:45.819 --- 10.0.0.1 ping statistics --- 00:08:45.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.819 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:08:45.819 06:34:50 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.819 06:34:50 -- nvmf/common.sh@411 -- # return 0 00:08:45.819 06:34:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:45.819 06:34:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.819 06:34:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:45.819 06:34:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:45.819 06:34:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.819 06:34:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:45.819 06:34:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:45.819 06:34:50 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:45.819 06:34:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:45.819 06:34:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:45.819 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:08:45.819 06:34:50 -- nvmf/common.sh@470 -- # nvmfpid=4086732 00:08:45.819 06:34:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:45.819 06:34:50 -- nvmf/common.sh@471 -- # waitforlisten 4086732 00:08:45.819 06:34:50 -- common/autotest_common.sh@817 -- # '[' -z 4086732 ']' 00:08:45.819 06:34:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.819 06:34:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:45.819 06:34:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.819 06:34:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:45.819 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:08:45.819 [2024-04-17 06:34:50.403728] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:08:45.819 [2024-04-17 06:34:50.403804] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.077 EAL: No free 2048 kB hugepages reported on node 1 00:08:46.077 [2024-04-17 06:34:50.472255] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:46.077 [2024-04-17 06:34:50.560213] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.077 [2024-04-17 06:34:50.560278] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.077 [2024-04-17 06:34:50.560293] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:46.077 [2024-04-17 06:34:50.560305] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:46.077 [2024-04-17 06:34:50.560316] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
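The discovery suite re-runs nvmftestinit, and the interface plumbing logged above is how these phy tests reach the target over real hardware: one port of the E810 pair (cvl_0_0) is moved into a private namespace and addressed as 10.0.0.2, the peer port (cvl_0_1) stays in the root namespace as 10.0.0.1, TCP/4420 is explicitly allowed in, and a ping in each direction proves the link before the target is started. In command form, as logged:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator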
00:08:46.077 [2024-04-17 06:34:50.560375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.077 [2024-04-17 06:34:50.560437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.077 [2024-04-17 06:34:50.560480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:46.077 [2024-04-17 06:34:50.560482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.077 06:34:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:46.077 06:34:50 -- common/autotest_common.sh@850 -- # return 0 00:08:46.077 06:34:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:46.077 06:34:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:46.077 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:08:46.336 06:34:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.336 06:34:50 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:46.336 06:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.336 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:08:46.336 [2024-04-17 06:34:50.702740] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.336 06:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.336 06:34:50 -- target/discovery.sh@26 -- # seq 1 4 00:08:46.336 06:34:50 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:46.336 06:34:50 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:46.336 06:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.336 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:08:46.336 Null1 00:08:46.336 06:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.336 06:34:50 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:46.336 06:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.336 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:08:46.336 06:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.336 06:34:50 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:46.336 06:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.336 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:08:46.336 06:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.336 06:34:50 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:46.336 06:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.336 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:08:46.336 [2024-04-17 06:34:50.743012] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:46.336 06:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.336 06:34:50 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:46.336 06:34:50 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:46.336 06:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.336 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:08:46.336 Null2 00:08:46.336 06:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.336 06:34:50 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:46.336 06:34:50 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.336 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:08:46.336 06:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.336 06:34:50 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:46.336 06:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.336 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:08:46.336 06:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.336 06:34:50 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:46.336 06:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.336 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:08:46.336 06:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.336 06:34:50 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:46.336 06:34:50 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:46.336 06:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.336 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:08:46.336 Null3 00:08:46.336 06:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.336 06:34:50 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:46.336 06:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.336 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:08:46.336 06:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.336 06:34:50 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:46.336 06:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.336 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:08:46.336 06:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.336 06:34:50 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:46.336 06:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.336 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:08:46.336 06:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.336 06:34:50 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:46.336 06:34:50 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:46.336 06:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.336 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:08:46.336 Null4 00:08:46.336 06:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.336 06:34:50 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:46.336 06:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.336 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:08:46.336 06:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.336 06:34:50 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:46.336 06:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.336 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:08:46.336 06:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.336 06:34:50 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:46.336 
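Unrolled, the repeated xtrace blocks above provision four identical null-backed subsystems. The test drives this through its rpc_cmd wrapper; the sketch below assumes that calling SPDK's stock scripts/rpc.py against the default /var/tmp/spdk.sock socket is equivalent, which is what the wrapper does in this run, and copies the size/serial arguments from the trace:

RPC=./scripts/rpc.py                 # path assumed relative to the spdk checkout
$RPC nvmf_create_transport -t tcp -o -u 8192
for i in 1 2 3 4; do
    $RPC bdev_null_create Null$i 102400 512                       # null bdev, args as traced
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done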
06:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.336 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:08:46.336 06:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.336 06:34:50 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:46.336 06:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.336 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:08:46.336 06:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.336 06:34:50 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:46.336 06:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.336 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:08:46.336 06:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.336 06:34:50 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:08:46.336 00:08:46.336 Discovery Log Number of Records 6, Generation counter 6 00:08:46.336 =====Discovery Log Entry 0====== 00:08:46.336 trtype: tcp 00:08:46.336 adrfam: ipv4 00:08:46.336 subtype: current discovery subsystem 00:08:46.336 treq: not required 00:08:46.336 portid: 0 00:08:46.336 trsvcid: 4420 00:08:46.336 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:46.336 traddr: 10.0.0.2 00:08:46.336 eflags: explicit discovery connections, duplicate discovery information 00:08:46.336 sectype: none 00:08:46.336 =====Discovery Log Entry 1====== 00:08:46.336 trtype: tcp 00:08:46.336 adrfam: ipv4 00:08:46.336 subtype: nvme subsystem 00:08:46.336 treq: not required 00:08:46.336 portid: 0 00:08:46.336 trsvcid: 4420 00:08:46.336 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:46.336 traddr: 10.0.0.2 00:08:46.336 eflags: none 00:08:46.336 sectype: none 00:08:46.336 =====Discovery Log Entry 2====== 00:08:46.336 trtype: tcp 00:08:46.336 adrfam: ipv4 00:08:46.336 subtype: nvme subsystem 00:08:46.336 treq: not required 00:08:46.336 portid: 0 00:08:46.336 trsvcid: 4420 00:08:46.336 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:46.336 traddr: 10.0.0.2 00:08:46.336 eflags: none 00:08:46.336 sectype: none 00:08:46.336 =====Discovery Log Entry 3====== 00:08:46.336 trtype: tcp 00:08:46.336 adrfam: ipv4 00:08:46.336 subtype: nvme subsystem 00:08:46.336 treq: not required 00:08:46.336 portid: 0 00:08:46.336 trsvcid: 4420 00:08:46.336 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:46.336 traddr: 10.0.0.2 00:08:46.336 eflags: none 00:08:46.336 sectype: none 00:08:46.336 =====Discovery Log Entry 4====== 00:08:46.336 trtype: tcp 00:08:46.336 adrfam: ipv4 00:08:46.336 subtype: nvme subsystem 00:08:46.336 treq: not required 00:08:46.336 portid: 0 00:08:46.336 trsvcid: 4420 00:08:46.336 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:46.336 traddr: 10.0.0.2 00:08:46.337 eflags: none 00:08:46.337 sectype: none 00:08:46.337 =====Discovery Log Entry 5====== 00:08:46.337 trtype: tcp 00:08:46.337 adrfam: ipv4 00:08:46.337 subtype: discovery subsystem referral 00:08:46.337 treq: not required 00:08:46.337 portid: 0 00:08:46.337 trsvcid: 4430 00:08:46.337 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:46.337 traddr: 10.0.0.2 00:08:46.337 eflags: none 00:08:46.337 sectype: none 00:08:46.337 06:34:50 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:46.337 Perform nvmf subsystem discovery via RPC 00:08:46.337 06:34:50 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:46.337 06:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.337 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:08:46.337 [2024-04-17 06:34:50.927417] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:46.337 [ 00:08:46.337 { 00:08:46.337 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:46.337 "subtype": "Discovery", 00:08:46.337 "listen_addresses": [ 00:08:46.337 { 00:08:46.337 "transport": "TCP", 00:08:46.337 "trtype": "TCP", 00:08:46.337 "adrfam": "IPv4", 00:08:46.337 "traddr": "10.0.0.2", 00:08:46.337 "trsvcid": "4420" 00:08:46.337 } 00:08:46.337 ], 00:08:46.337 "allow_any_host": true, 00:08:46.337 "hosts": [] 00:08:46.337 }, 00:08:46.337 { 00:08:46.337 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:46.337 "subtype": "NVMe", 00:08:46.337 "listen_addresses": [ 00:08:46.337 { 00:08:46.337 "transport": "TCP", 00:08:46.337 "trtype": "TCP", 00:08:46.337 "adrfam": "IPv4", 00:08:46.337 "traddr": "10.0.0.2", 00:08:46.337 "trsvcid": "4420" 00:08:46.337 } 00:08:46.337 ], 00:08:46.337 "allow_any_host": true, 00:08:46.337 "hosts": [], 00:08:46.337 "serial_number": "SPDK00000000000001", 00:08:46.337 "model_number": "SPDK bdev Controller", 00:08:46.337 "max_namespaces": 32, 00:08:46.337 "min_cntlid": 1, 00:08:46.337 "max_cntlid": 65519, 00:08:46.337 "namespaces": [ 00:08:46.337 { 00:08:46.337 "nsid": 1, 00:08:46.337 "bdev_name": "Null1", 00:08:46.337 "name": "Null1", 00:08:46.337 "nguid": "EC9FF8DB83FB464E9E8CBCA5586D86CC", 00:08:46.337 "uuid": "ec9ff8db-83fb-464e-9e8c-bca5586d86cc" 00:08:46.337 } 00:08:46.337 ] 00:08:46.337 }, 00:08:46.337 { 00:08:46.337 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:46.337 "subtype": "NVMe", 00:08:46.337 "listen_addresses": [ 00:08:46.337 { 00:08:46.337 "transport": "TCP", 00:08:46.337 "trtype": "TCP", 00:08:46.337 "adrfam": "IPv4", 00:08:46.337 "traddr": "10.0.0.2", 00:08:46.337 "trsvcid": "4420" 00:08:46.337 } 00:08:46.337 ], 00:08:46.337 "allow_any_host": true, 00:08:46.337 "hosts": [], 00:08:46.337 "serial_number": "SPDK00000000000002", 00:08:46.337 "model_number": "SPDK bdev Controller", 00:08:46.337 "max_namespaces": 32, 00:08:46.337 "min_cntlid": 1, 00:08:46.337 "max_cntlid": 65519, 00:08:46.337 "namespaces": [ 00:08:46.337 { 00:08:46.337 "nsid": 1, 00:08:46.337 "bdev_name": "Null2", 00:08:46.337 "name": "Null2", 00:08:46.337 "nguid": "780C1203E8D4461DAF48EA8D89378963", 00:08:46.337 "uuid": "780c1203-e8d4-461d-af48-ea8d89378963" 00:08:46.337 } 00:08:46.337 ] 00:08:46.337 }, 00:08:46.337 { 00:08:46.337 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:46.337 "subtype": "NVMe", 00:08:46.337 "listen_addresses": [ 00:08:46.337 { 00:08:46.337 "transport": "TCP", 00:08:46.337 "trtype": "TCP", 00:08:46.337 "adrfam": "IPv4", 00:08:46.337 "traddr": "10.0.0.2", 00:08:46.337 "trsvcid": "4420" 00:08:46.337 } 00:08:46.337 ], 00:08:46.337 "allow_any_host": true, 00:08:46.337 "hosts": [], 00:08:46.337 "serial_number": "SPDK00000000000003", 00:08:46.337 "model_number": "SPDK bdev Controller", 00:08:46.595 "max_namespaces": 32, 00:08:46.595 "min_cntlid": 1, 00:08:46.595 "max_cntlid": 65519, 00:08:46.595 "namespaces": [ 00:08:46.595 { 00:08:46.595 "nsid": 1, 00:08:46.595 "bdev_name": "Null3", 00:08:46.595 "name": "Null3", 00:08:46.595 "nguid": "B87E66B60A714D978DFF9E05C273E07B", 00:08:46.595 "uuid": "b87e66b6-0a71-4d97-8dff-9e05c273e07b" 00:08:46.595 } 00:08:46.595 ] 
00:08:46.595 }, 00:08:46.595 { 00:08:46.595 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:46.595 "subtype": "NVMe", 00:08:46.595 "listen_addresses": [ 00:08:46.595 { 00:08:46.595 "transport": "TCP", 00:08:46.595 "trtype": "TCP", 00:08:46.595 "adrfam": "IPv4", 00:08:46.595 "traddr": "10.0.0.2", 00:08:46.595 "trsvcid": "4420" 00:08:46.595 } 00:08:46.595 ], 00:08:46.595 "allow_any_host": true, 00:08:46.595 "hosts": [], 00:08:46.595 "serial_number": "SPDK00000000000004", 00:08:46.595 "model_number": "SPDK bdev Controller", 00:08:46.595 "max_namespaces": 32, 00:08:46.595 "min_cntlid": 1, 00:08:46.595 "max_cntlid": 65519, 00:08:46.595 "namespaces": [ 00:08:46.595 { 00:08:46.595 "nsid": 1, 00:08:46.595 "bdev_name": "Null4", 00:08:46.595 "name": "Null4", 00:08:46.595 "nguid": "FDA7352050E2489E9467F5BCE7BC664A", 00:08:46.595 "uuid": "fda73520-50e2-489e-9467-f5bce7bc664a" 00:08:46.595 } 00:08:46.595 ] 00:08:46.595 } 00:08:46.595 ] 00:08:46.595 06:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.595 06:34:50 -- target/discovery.sh@42 -- # seq 1 4 00:08:46.595 06:34:50 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:46.595 06:34:50 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:46.596 06:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.596 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:08:46.596 06:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.596 06:34:50 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:46.596 06:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.596 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:08:46.596 06:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.596 06:34:50 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:46.596 06:34:50 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:46.596 06:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.596 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:08:46.596 06:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.596 06:34:50 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:46.596 06:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.596 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:08:46.596 06:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.596 06:34:50 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:46.596 06:34:50 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:46.596 06:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.596 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:08:46.596 06:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.596 06:34:50 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:46.596 06:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.596 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:08:46.596 06:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.596 06:34:50 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:46.596 06:34:51 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:46.596 06:34:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.596 06:34:51 -- common/autotest_common.sh@10 -- # set +x 00:08:46.596 06:34:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
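Two views of the same state are compared above: what an initiator learns from the discovery service (six records, including the extra referral on port 4430 added with nvmf_discovery_add_referral) and the target's own configuration via nvmf_get_subsystems. A compact way to reproduce that cross-check; the jq filters and the generated hostnqn are illustrative, the run above used a fixed host UUID:

HOSTNQN=$(nvme gen-hostnqn)                        # this run used a fixed uuid instead
nvme discover --hostnqn="$HOSTNQN" -t tcp -a 10.0.0.2 -s 4420 -o json \
    | jq -r '.records[] | "\(.subtype)  \(.subnqn)  \(.traddr)"'   # initiator-side view
./scripts/rpc.py nvmf_get_subsystems | jq -r '.[].nqn'             # target-side view
./scripts/rpc.py nvmf_discovery_get_referrals                      # should list the 4430 referral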
00:08:46.596 06:34:51 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:46.596 06:34:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.596 06:34:51 -- common/autotest_common.sh@10 -- # set +x 00:08:46.596 06:34:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.596 06:34:51 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:46.596 06:34:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.596 06:34:51 -- common/autotest_common.sh@10 -- # set +x 00:08:46.596 06:34:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.596 06:34:51 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:46.596 06:34:51 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:46.596 06:34:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.596 06:34:51 -- common/autotest_common.sh@10 -- # set +x 00:08:46.596 06:34:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.596 06:34:51 -- target/discovery.sh@49 -- # check_bdevs= 00:08:46.596 06:34:51 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:46.596 06:34:51 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:46.596 06:34:51 -- target/discovery.sh@57 -- # nvmftestfini 00:08:46.596 06:34:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:46.596 06:34:51 -- nvmf/common.sh@117 -- # sync 00:08:46.596 06:34:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:46.596 06:34:51 -- nvmf/common.sh@120 -- # set +e 00:08:46.596 06:34:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:46.596 06:34:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:46.596 rmmod nvme_tcp 00:08:46.596 rmmod nvme_fabrics 00:08:46.596 rmmod nvme_keyring 00:08:46.596 06:34:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:46.596 06:34:51 -- nvmf/common.sh@124 -- # set -e 00:08:46.596 06:34:51 -- nvmf/common.sh@125 -- # return 0 00:08:46.596 06:34:51 -- nvmf/common.sh@478 -- # '[' -n 4086732 ']' 00:08:46.596 06:34:51 -- nvmf/common.sh@479 -- # killprocess 4086732 00:08:46.596 06:34:51 -- common/autotest_common.sh@936 -- # '[' -z 4086732 ']' 00:08:46.596 06:34:51 -- common/autotest_common.sh@940 -- # kill -0 4086732 00:08:46.596 06:34:51 -- common/autotest_common.sh@941 -- # uname 00:08:46.596 06:34:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:46.596 06:34:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4086732 00:08:46.596 06:34:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:46.596 06:34:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:46.596 06:34:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4086732' 00:08:46.596 killing process with pid 4086732 00:08:46.596 06:34:51 -- common/autotest_common.sh@955 -- # kill 4086732 00:08:46.596 [2024-04-17 06:34:51.145450] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:46.596 06:34:51 -- common/autotest_common.sh@960 -- # wait 4086732 00:08:46.855 06:34:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:46.855 06:34:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:46.855 06:34:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:46.855 06:34:51 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:46.855 06:34:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:46.855 06:34:51 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.855 06:34:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:46.855 06:34:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.426 06:34:53 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:49.426 00:08:49.426 real 0m5.337s 00:08:49.426 user 0m4.052s 00:08:49.426 sys 0m1.841s 00:08:49.426 06:34:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:49.426 06:34:53 -- common/autotest_common.sh@10 -- # set +x 00:08:49.426 ************************************ 00:08:49.426 END TEST nvmf_discovery 00:08:49.426 ************************************ 00:08:49.426 06:34:53 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:49.426 06:34:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:49.426 06:34:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:49.426 06:34:53 -- common/autotest_common.sh@10 -- # set +x 00:08:49.426 ************************************ 00:08:49.426 START TEST nvmf_referrals 00:08:49.426 ************************************ 00:08:49.426 06:34:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:49.426 * Looking for test storage... 00:08:49.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:49.426 06:34:53 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:49.426 06:34:53 -- nvmf/common.sh@7 -- # uname -s 00:08:49.426 06:34:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.426 06:34:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.426 06:34:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.426 06:34:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.426 06:34:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.426 06:34:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.426 06:34:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.426 06:34:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.426 06:34:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.426 06:34:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.426 06:34:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:49.426 06:34:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:49.426 06:34:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.426 06:34:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.426 06:34:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:49.426 06:34:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.426 06:34:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:49.426 06:34:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.426 06:34:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.426 06:34:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.426 06:34:53 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.426 06:34:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.426 06:34:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.426 06:34:53 -- paths/export.sh@5 -- # export PATH 00:08:49.426 06:34:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.426 06:34:53 -- nvmf/common.sh@47 -- # : 0 00:08:49.426 06:34:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:49.426 06:34:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:49.426 06:34:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.426 06:34:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.426 06:34:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.426 06:34:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:49.426 06:34:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:49.426 06:34:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:49.426 06:34:53 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:49.426 06:34:53 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:49.426 06:34:53 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:49.426 06:34:53 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:49.426 06:34:53 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:49.426 06:34:53 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:49.426 06:34:53 -- target/referrals.sh@37 -- # nvmftestinit 00:08:49.426 06:34:53 -- nvmf/common.sh@430 -- # '[' 
-z tcp ']' 00:08:49.426 06:34:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.426 06:34:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:49.426 06:34:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:49.426 06:34:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:49.426 06:34:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.426 06:34:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:49.426 06:34:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.426 06:34:53 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:49.426 06:34:53 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:49.426 06:34:53 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:49.426 06:34:53 -- common/autotest_common.sh@10 -- # set +x 00:08:51.328 06:34:55 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:51.328 06:34:55 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:51.328 06:34:55 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:51.328 06:34:55 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:51.328 06:34:55 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:51.328 06:34:55 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:51.328 06:34:55 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:51.328 06:34:55 -- nvmf/common.sh@295 -- # net_devs=() 00:08:51.328 06:34:55 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:51.328 06:34:55 -- nvmf/common.sh@296 -- # e810=() 00:08:51.328 06:34:55 -- nvmf/common.sh@296 -- # local -ga e810 00:08:51.328 06:34:55 -- nvmf/common.sh@297 -- # x722=() 00:08:51.328 06:34:55 -- nvmf/common.sh@297 -- # local -ga x722 00:08:51.328 06:34:55 -- nvmf/common.sh@298 -- # mlx=() 00:08:51.328 06:34:55 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:51.328 06:34:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:51.328 06:34:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:51.328 06:34:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:51.328 06:34:55 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:51.328 06:34:55 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:51.328 06:34:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:51.328 06:34:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:51.328 06:34:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:51.328 06:34:55 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:51.328 06:34:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:51.328 06:34:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:51.328 06:34:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:51.328 06:34:55 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:51.328 06:34:55 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:51.328 06:34:55 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:51.328 06:34:55 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:51.328 06:34:55 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:51.328 06:34:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:51.328 06:34:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:51.328 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:51.328 06:34:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:51.328 06:34:55 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:51.328 06:34:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.328 06:34:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.328 06:34:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:51.328 06:34:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:51.328 06:34:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:51.328 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:51.328 06:34:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:51.328 06:34:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:51.328 06:34:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.328 06:34:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.328 06:34:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:51.328 06:34:55 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:51.328 06:34:55 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:51.328 06:34:55 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:51.328 06:34:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:51.328 06:34:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.328 06:34:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:51.328 06:34:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.328 06:34:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:51.328 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:51.328 06:34:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.328 06:34:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:51.328 06:34:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.328 06:34:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:51.328 06:34:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.328 06:34:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:51.328 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:51.328 06:34:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.328 06:34:55 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:51.328 06:34:55 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:51.328 06:34:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:51.328 06:34:55 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:51.328 06:34:55 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:51.328 06:34:55 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:51.328 06:34:55 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:51.328 06:34:55 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:51.328 06:34:55 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:51.328 06:34:55 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:51.328 06:34:55 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:51.328 06:34:55 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:51.328 06:34:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:51.328 06:34:55 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:51.328 06:34:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:51.328 06:34:55 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:51.328 06:34:55 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:51.328 06:34:55 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
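The device-probe block above walks the supported PCI ID tables (e810, x722, mlx), then resolves each matched PCI function to its kernel interface name through sysfs. That resolution step in isolation, using the two addresses found in this run:

for pci in 0000:0a:00.0 0000:0a:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per netdev bound to this function
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done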
00:08:51.328 06:34:55 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:51.328 06:34:55 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:51.328 06:34:55 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:51.328 06:34:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:51.328 06:34:55 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:51.328 06:34:55 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:51.328 06:34:55 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:51.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:51.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:08:51.328 00:08:51.328 --- 10.0.0.2 ping statistics --- 00:08:51.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.328 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:08:51.328 06:34:55 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:51.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:51.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:08:51.328 00:08:51.328 --- 10.0.0.1 ping statistics --- 00:08:51.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.328 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:08:51.328 06:34:55 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.328 06:34:55 -- nvmf/common.sh@411 -- # return 0 00:08:51.328 06:34:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:51.328 06:34:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.328 06:34:55 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:51.328 06:34:55 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:51.328 06:34:55 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.328 06:34:55 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:51.328 06:34:55 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:51.328 06:34:55 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:51.328 06:34:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:51.328 06:34:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:51.328 06:34:55 -- common/autotest_common.sh@10 -- # set +x 00:08:51.328 06:34:55 -- nvmf/common.sh@470 -- # nvmfpid=4088838 00:08:51.328 06:34:55 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:51.328 06:34:55 -- nvmf/common.sh@471 -- # waitforlisten 4088838 00:08:51.328 06:34:55 -- common/autotest_common.sh@817 -- # '[' -z 4088838 ']' 00:08:51.328 06:34:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.328 06:34:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:51.328 06:34:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.328 06:34:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:51.328 06:34:55 -- common/autotest_common.sh@10 -- # set +x 00:08:51.328 [2024-04-17 06:34:55.768852] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:08:51.328 [2024-04-17 06:34:55.768929] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.328 EAL: No free 2048 kB hugepages reported on node 1 00:08:51.329 [2024-04-17 06:34:55.833715] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:51.329 [2024-04-17 06:34:55.921876] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.329 [2024-04-17 06:34:55.921934] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.329 [2024-04-17 06:34:55.921961] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.329 [2024-04-17 06:34:55.921972] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:51.329 [2024-04-17 06:34:55.921982] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:51.329 [2024-04-17 06:34:55.922062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.329 [2024-04-17 06:34:55.922127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.329 [2024-04-17 06:34:55.922156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:51.329 [2024-04-17 06:34:55.922158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.587 06:34:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:51.587 06:34:56 -- common/autotest_common.sh@850 -- # return 0 00:08:51.587 06:34:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:51.587 06:34:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:51.587 06:34:56 -- common/autotest_common.sh@10 -- # set +x 00:08:51.587 06:34:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.587 06:34:56 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:51.587 06:34:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.587 06:34:56 -- common/autotest_common.sh@10 -- # set +x 00:08:51.587 [2024-04-17 06:34:56.081979] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.587 06:34:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.587 06:34:56 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:51.587 06:34:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.587 06:34:56 -- common/autotest_common.sh@10 -- # set +x 00:08:51.587 [2024-04-17 06:34:56.094215] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:51.587 06:34:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.587 06:34:56 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:51.587 06:34:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.587 06:34:56 -- common/autotest_common.sh@10 -- # set +x 00:08:51.587 06:34:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.587 06:34:56 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:51.587 06:34:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.587 06:34:56 -- common/autotest_common.sh@10 -- # set +x 00:08:51.587 06:34:56 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:08:51.587 06:34:56 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:51.587 06:34:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.587 06:34:56 -- common/autotest_common.sh@10 -- # set +x 00:08:51.587 06:34:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.587 06:34:56 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:51.587 06:34:56 -- target/referrals.sh@48 -- # jq length 00:08:51.587 06:34:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.587 06:34:56 -- common/autotest_common.sh@10 -- # set +x 00:08:51.587 06:34:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.587 06:34:56 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:51.587 06:34:56 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:51.587 06:34:56 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:51.587 06:34:56 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:51.587 06:34:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.587 06:34:56 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:51.587 06:34:56 -- common/autotest_common.sh@10 -- # set +x 00:08:51.587 06:34:56 -- target/referrals.sh@21 -- # sort 00:08:51.587 06:34:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.845 06:34:56 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:51.845 06:34:56 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:51.845 06:34:56 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:51.845 06:34:56 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:51.845 06:34:56 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:51.845 06:34:56 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:51.845 06:34:56 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:51.845 06:34:56 -- target/referrals.sh@26 -- # sort 00:08:51.845 06:34:56 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:51.845 06:34:56 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:51.845 06:34:56 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:51.845 06:34:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.845 06:34:56 -- common/autotest_common.sh@10 -- # set +x 00:08:51.845 06:34:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.845 06:34:56 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:51.845 06:34:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.845 06:34:56 -- common/autotest_common.sh@10 -- # set +x 00:08:51.845 06:34:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.845 06:34:56 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:51.845 06:34:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.845 06:34:56 -- common/autotest_common.sh@10 -- # set +x 00:08:51.845 06:34:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.845 06:34:56 -- target/referrals.sh@56 -- # rpc_cmd 
nvmf_discovery_get_referrals 00:08:51.845 06:34:56 -- target/referrals.sh@56 -- # jq length 00:08:51.845 06:34:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.845 06:34:56 -- common/autotest_common.sh@10 -- # set +x 00:08:51.845 06:34:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.845 06:34:56 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:51.845 06:34:56 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:51.845 06:34:56 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:51.845 06:34:56 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:51.845 06:34:56 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:51.845 06:34:56 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:51.845 06:34:56 -- target/referrals.sh@26 -- # sort 00:08:52.103 06:34:56 -- target/referrals.sh@26 -- # echo 00:08:52.103 06:34:56 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:52.103 06:34:56 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:52.103 06:34:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:52.103 06:34:56 -- common/autotest_common.sh@10 -- # set +x 00:08:52.103 06:34:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:52.103 06:34:56 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:52.103 06:34:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:52.103 06:34:56 -- common/autotest_common.sh@10 -- # set +x 00:08:52.103 06:34:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:52.103 06:34:56 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:52.103 06:34:56 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:52.103 06:34:56 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:52.103 06:34:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:52.103 06:34:56 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:52.104 06:34:56 -- common/autotest_common.sh@10 -- # set +x 00:08:52.104 06:34:56 -- target/referrals.sh@21 -- # sort 00:08:52.104 06:34:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:52.104 06:34:56 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:52.104 06:34:56 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:52.104 06:34:56 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:52.104 06:34:56 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:52.104 06:34:56 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:52.104 06:34:56 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:52.104 06:34:56 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:52.104 06:34:56 -- target/referrals.sh@26 -- # sort 00:08:52.104 06:34:56 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:52.104 06:34:56 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:52.104 06:34:56 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme 
subsystem' 00:08:52.104 06:34:56 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:52.104 06:34:56 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:52.104 06:34:56 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:52.104 06:34:56 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:52.362 06:34:56 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:52.362 06:34:56 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:52.362 06:34:56 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:52.362 06:34:56 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:52.362 06:34:56 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:52.362 06:34:56 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:52.362 06:34:56 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:52.362 06:34:56 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:52.362 06:34:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:52.362 06:34:56 -- common/autotest_common.sh@10 -- # set +x 00:08:52.362 06:34:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:52.362 06:34:56 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:52.362 06:34:56 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:52.362 06:34:56 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:52.362 06:34:56 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:52.362 06:34:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:52.362 06:34:56 -- target/referrals.sh@21 -- # sort 00:08:52.362 06:34:56 -- common/autotest_common.sh@10 -- # set +x 00:08:52.362 06:34:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:52.362 06:34:56 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:52.362 06:34:56 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:52.362 06:34:56 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:52.362 06:34:56 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:52.362 06:34:56 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:52.362 06:34:56 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:52.362 06:34:56 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:52.362 06:34:56 -- target/referrals.sh@26 -- # sort 00:08:52.619 06:34:57 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:52.619 06:34:57 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:52.619 06:34:57 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:52.619 06:34:57 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:52.619 06:34:57 -- 
target/referrals.sh@75 -- # jq -r .subnqn 00:08:52.619 06:34:57 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:52.619 06:34:57 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:52.619 06:34:57 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:52.619 06:34:57 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:52.620 06:34:57 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:52.620 06:34:57 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:52.620 06:34:57 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:52.620 06:34:57 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:52.877 06:34:57 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:52.877 06:34:57 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:52.877 06:34:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:52.877 06:34:57 -- common/autotest_common.sh@10 -- # set +x 00:08:52.877 06:34:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:52.877 06:34:57 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:52.877 06:34:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:52.877 06:34:57 -- target/referrals.sh@82 -- # jq length 00:08:52.877 06:34:57 -- common/autotest_common.sh@10 -- # set +x 00:08:52.877 06:34:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:52.877 06:34:57 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:52.877 06:34:57 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:52.877 06:34:57 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:52.877 06:34:57 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:52.877 06:34:57 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:52.877 06:34:57 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:52.877 06:34:57 -- target/referrals.sh@26 -- # sort 00:08:52.877 06:34:57 -- target/referrals.sh@26 -- # echo 00:08:52.877 06:34:57 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:52.878 06:34:57 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:52.878 06:34:57 -- target/referrals.sh@86 -- # nvmftestfini 00:08:52.878 06:34:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:52.878 06:34:57 -- nvmf/common.sh@117 -- # sync 00:08:52.878 06:34:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:52.878 06:34:57 -- nvmf/common.sh@120 -- # set +e 00:08:52.878 06:34:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:52.878 06:34:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:52.878 rmmod nvme_tcp 00:08:52.878 rmmod nvme_fabrics 00:08:52.878 rmmod nvme_keyring 00:08:52.878 06:34:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:52.878 06:34:57 -- nvmf/common.sh@124 -- # 
set -e 00:08:52.878 06:34:57 -- nvmf/common.sh@125 -- # return 0 00:08:52.878 06:34:57 -- nvmf/common.sh@478 -- # '[' -n 4088838 ']' 00:08:52.878 06:34:57 -- nvmf/common.sh@479 -- # killprocess 4088838 00:08:52.878 06:34:57 -- common/autotest_common.sh@936 -- # '[' -z 4088838 ']' 00:08:52.878 06:34:57 -- common/autotest_common.sh@940 -- # kill -0 4088838 00:08:52.878 06:34:57 -- common/autotest_common.sh@941 -- # uname 00:08:52.878 06:34:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:52.878 06:34:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4088838 00:08:52.878 06:34:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:52.878 06:34:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:52.878 06:34:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4088838' 00:08:52.878 killing process with pid 4088838 00:08:52.878 06:34:57 -- common/autotest_common.sh@955 -- # kill 4088838 00:08:52.878 06:34:57 -- common/autotest_common.sh@960 -- # wait 4088838 00:08:53.136 06:34:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:53.136 06:34:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:53.136 06:34:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:53.136 06:34:57 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:53.137 06:34:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:53.137 06:34:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.137 06:34:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:53.137 06:34:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.670 06:34:59 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:55.670 00:08:55.670 real 0m6.211s 00:08:55.670 user 0m8.435s 00:08:55.670 sys 0m1.902s 00:08:55.670 06:34:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:55.670 06:34:59 -- common/autotest_common.sh@10 -- # set +x 00:08:55.670 ************************************ 00:08:55.670 END TEST nvmf_referrals 00:08:55.670 ************************************ 00:08:55.670 06:34:59 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:55.670 06:34:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:55.670 06:34:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:55.670 06:34:59 -- common/autotest_common.sh@10 -- # set +x 00:08:55.670 ************************************ 00:08:55.670 START TEST nvmf_connect_disconnect 00:08:55.670 ************************************ 00:08:55.670 06:34:59 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:55.670 * Looking for test storage... 
00:08:55.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:55.670 06:34:59 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:55.670 06:34:59 -- nvmf/common.sh@7 -- # uname -s 00:08:55.670 06:34:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:55.670 06:34:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:55.670 06:34:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:55.670 06:34:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:55.671 06:34:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:55.671 06:34:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:55.671 06:34:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:55.671 06:34:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:55.671 06:34:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:55.671 06:34:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:55.671 06:34:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:55.671 06:34:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:55.671 06:34:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:55.671 06:34:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:55.671 06:34:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:55.671 06:34:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:55.671 06:34:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:55.671 06:34:59 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.671 06:34:59 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.671 06:34:59 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.671 06:34:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.671 06:34:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.671 06:34:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.671 06:34:59 -- paths/export.sh@5 -- # export PATH 00:08:55.671 06:34:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.671 06:34:59 -- nvmf/common.sh@47 -- # : 0 00:08:55.671 06:34:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:55.671 06:34:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:55.671 06:34:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:55.671 06:34:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:55.671 06:34:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:55.671 06:34:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:55.671 06:34:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:55.671 06:34:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:55.671 06:34:59 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:55.671 06:34:59 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:55.671 06:34:59 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:55.671 06:34:59 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:55.671 06:34:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:55.671 06:34:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:55.671 06:34:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:55.671 06:34:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:55.671 06:34:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.671 06:34:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:55.671 06:34:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.671 06:34:59 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:55.671 06:34:59 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:55.671 06:34:59 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:55.671 06:34:59 -- common/autotest_common.sh@10 -- # set +x 00:08:57.575 06:35:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:57.575 06:35:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:57.575 06:35:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:57.575 06:35:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:57.575 06:35:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:57.575 06:35:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:57.575 06:35:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:57.575 06:35:01 -- nvmf/common.sh@295 -- # net_devs=() 00:08:57.575 06:35:01 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:08:57.575 06:35:01 -- nvmf/common.sh@296 -- # e810=() 00:08:57.575 06:35:01 -- nvmf/common.sh@296 -- # local -ga e810 00:08:57.575 06:35:01 -- nvmf/common.sh@297 -- # x722=() 00:08:57.575 06:35:01 -- nvmf/common.sh@297 -- # local -ga x722 00:08:57.575 06:35:01 -- nvmf/common.sh@298 -- # mlx=() 00:08:57.575 06:35:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:57.575 06:35:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:57.575 06:35:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:57.575 06:35:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:57.575 06:35:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:57.575 06:35:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:57.575 06:35:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:57.575 06:35:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:57.575 06:35:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:57.575 06:35:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:57.575 06:35:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:57.575 06:35:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:57.575 06:35:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:57.575 06:35:01 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:57.575 06:35:01 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:57.575 06:35:01 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:57.575 06:35:01 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:57.575 06:35:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:57.575 06:35:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:57.575 06:35:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:57.575 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:57.575 06:35:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:57.575 06:35:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:57.575 06:35:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.575 06:35:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.575 06:35:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:57.575 06:35:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:57.575 06:35:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:57.575 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:57.575 06:35:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:57.575 06:35:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:57.575 06:35:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.575 06:35:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.575 06:35:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:57.575 06:35:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:57.575 06:35:01 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:57.575 06:35:01 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:57.575 06:35:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:57.575 06:35:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.575 06:35:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:57.575 06:35:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.575 06:35:01 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:0a:00.0: cvl_0_0' 00:08:57.575 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:57.575 06:35:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.575 06:35:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:57.575 06:35:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.575 06:35:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:57.575 06:35:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.575 06:35:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:57.575 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:57.575 06:35:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.575 06:35:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:57.575 06:35:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:57.575 06:35:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:57.575 06:35:01 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:57.575 06:35:01 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:57.575 06:35:01 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:57.575 06:35:01 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:57.575 06:35:01 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:57.575 06:35:01 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:57.575 06:35:01 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:57.575 06:35:01 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:57.575 06:35:01 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:57.575 06:35:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:57.575 06:35:01 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:57.575 06:35:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:57.575 06:35:01 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:57.575 06:35:01 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:57.575 06:35:01 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:57.575 06:35:01 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:57.575 06:35:01 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:57.575 06:35:01 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:57.575 06:35:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:57.575 06:35:01 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:57.575 06:35:02 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:57.575 06:35:02 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:57.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:57.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:08:57.575 00:08:57.575 --- 10.0.0.2 ping statistics --- 00:08:57.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.575 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:08:57.575 06:35:02 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:57.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:57.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:08:57.575 00:08:57.575 --- 10.0.0.1 ping statistics --- 00:08:57.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.575 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:08:57.575 06:35:02 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:57.575 06:35:02 -- nvmf/common.sh@411 -- # return 0 00:08:57.575 06:35:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:57.575 06:35:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:57.575 06:35:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:57.575 06:35:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:57.575 06:35:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:57.575 06:35:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:57.575 06:35:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:57.575 06:35:02 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:57.575 06:35:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:57.575 06:35:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:57.575 06:35:02 -- common/autotest_common.sh@10 -- # set +x 00:08:57.575 06:35:02 -- nvmf/common.sh@470 -- # nvmfpid=4091126 00:08:57.575 06:35:02 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:57.575 06:35:02 -- nvmf/common.sh@471 -- # waitforlisten 4091126 00:08:57.575 06:35:02 -- common/autotest_common.sh@817 -- # '[' -z 4091126 ']' 00:08:57.575 06:35:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.575 06:35:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:57.575 06:35:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.575 06:35:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:57.575 06:35:02 -- common/autotest_common.sh@10 -- # set +x 00:08:57.575 [2024-04-17 06:35:02.089303] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:08:57.575 [2024-04-17 06:35:02.089381] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.575 EAL: No free 2048 kB hugepages reported on node 1 00:08:57.575 [2024-04-17 06:35:02.153676] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:57.834 [2024-04-17 06:35:02.240084] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:57.834 [2024-04-17 06:35:02.240139] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:57.834 [2024-04-17 06:35:02.240167] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:57.834 [2024-04-17 06:35:02.240185] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:57.834 [2024-04-17 06:35:02.240196] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
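The nvmf_tcp_init sequence traced above splits the two E810 ports between the root namespace and a private one, so the initiator and the target can talk over real TCP on a single host. A minimal consolidation of that sequence, using only the interface, namespace, and address values that appear in the trace (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk, 10.0.0.1/24, 10.0.0.2/24, port 4420); the address-flush steps are omitted here:

# Move the target-side port into its own namespace; the initiator port stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator keeps 10.0.0.1, the target will listen on 10.0.0.2 inside the namespace.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port and verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt application is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is why the EAL and reactor notices above come from the namespaced process.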
00:08:57.834 [2024-04-17 06:35:02.240270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.834 [2024-04-17 06:35:02.240688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.834 [2024-04-17 06:35:02.240749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.834 [2024-04-17 06:35:02.240746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:57.834 06:35:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:57.834 06:35:02 -- common/autotest_common.sh@850 -- # return 0 00:08:57.834 06:35:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:57.834 06:35:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:57.834 06:35:02 -- common/autotest_common.sh@10 -- # set +x 00:08:57.834 06:35:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.834 06:35:02 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:57.834 06:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.834 06:35:02 -- common/autotest_common.sh@10 -- # set +x 00:08:57.834 [2024-04-17 06:35:02.400986] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:57.834 06:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.834 06:35:02 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:57.834 06:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.834 06:35:02 -- common/autotest_common.sh@10 -- # set +x 00:08:57.834 06:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.834 06:35:02 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:57.834 06:35:02 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:57.834 06:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.834 06:35:02 -- common/autotest_common.sh@10 -- # set +x 00:08:58.092 06:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:58.092 06:35:02 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:58.092 06:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:58.092 06:35:02 -- common/autotest_common.sh@10 -- # set +x 00:08:58.092 06:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:58.092 06:35:02 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:58.092 06:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:58.092 06:35:02 -- common/autotest_common.sh@10 -- # set +x 00:08:58.092 [2024-04-17 06:35:02.458509] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:58.092 06:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:58.092 06:35:02 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:58.092 06:35:02 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:58.092 06:35:02 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:58.092 06:35:02 -- target/connect_disconnect.sh@34 -- # set +x 00:09:00.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:09:09.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.463 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.902 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.853 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.522 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.209 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.259 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.039 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.992 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.359 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:11:00.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.979 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.387 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.705 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.183 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.654 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.075 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.277 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.788 06:38:48 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
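The long run of "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines above is the visible half of connect_disconnect.sh's loop: num_iterations=100 and NVME_CONNECT='nvme connect -i 8' are set in the trace, and each iteration attaches to and then detaches from the single subsystem created at the start of the test. The target-side RPCs appear verbatim in the trace (issued through the rpc_cmd wrapper); the initiator loop itself is not echoed, so the version below is only a sketch of what the trace implies, with scripts/rpc.py standing in for rpc_cmd:

# Target side: TCP transport, a 64 MB / 512-byte-block malloc bdev, and one subsystem
# listening on 10.0.0.2:4420 (taken from the connect_disconnect.sh@18-24 trace lines).
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
scripts/rpc.py bdev_malloc_create 64 512                        # returns bdev name Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: hypothetical reconstruction of the 100-iteration loop; the host NQN/ID
# values are the ones produced by nvme gen-hostnqn earlier in the trace.
for i in $(seq 1 100); do
    nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # prints the "disconnected 1 controller(s)" line
done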
00:12:44.788 06:38:48 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:44.788 06:38:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:44.788 06:38:48 -- nvmf/common.sh@117 -- # sync 00:12:44.788 06:38:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:44.788 06:38:48 -- nvmf/common.sh@120 -- # set +e 00:12:44.788 06:38:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:44.788 06:38:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:44.788 rmmod nvme_tcp 00:12:44.788 rmmod nvme_fabrics 00:12:44.788 rmmod nvme_keyring 00:12:44.788 06:38:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:44.788 06:38:48 -- nvmf/common.sh@124 -- # set -e 00:12:44.788 06:38:48 -- nvmf/common.sh@125 -- # return 0 00:12:44.788 06:38:48 -- nvmf/common.sh@478 -- # '[' -n 4091126 ']' 00:12:44.788 06:38:48 -- nvmf/common.sh@479 -- # killprocess 4091126 00:12:44.789 06:38:48 -- common/autotest_common.sh@936 -- # '[' -z 4091126 ']' 00:12:44.789 06:38:48 -- common/autotest_common.sh@940 -- # kill -0 4091126 00:12:44.789 06:38:48 -- common/autotest_common.sh@941 -- # uname 00:12:44.789 06:38:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:44.789 06:38:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4091126 00:12:44.789 06:38:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:44.789 06:38:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:44.789 06:38:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4091126' 00:12:44.789 killing process with pid 4091126 00:12:44.789 06:38:48 -- common/autotest_common.sh@955 -- # kill 4091126 00:12:44.789 06:38:48 -- common/autotest_common.sh@960 -- # wait 4091126 00:12:44.789 06:38:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:44.789 06:38:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:44.789 06:38:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:44.789 06:38:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:44.789 06:38:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:44.789 06:38:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.789 06:38:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:44.789 06:38:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.690 06:38:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:46.690 00:12:46.690 real 3m51.282s 00:12:46.690 user 14m40.372s 00:12:46.690 sys 0m31.473s 00:12:46.690 06:38:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:46.690 06:38:51 -- common/autotest_common.sh@10 -- # set +x 00:12:46.690 ************************************ 00:12:46.690 END TEST nvmf_connect_disconnect 00:12:46.690 ************************************ 00:12:46.690 06:38:51 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:46.690 06:38:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:46.690 06:38:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:46.690 06:38:51 -- common/autotest_common.sh@10 -- # set +x 00:12:46.690 ************************************ 00:12:46.690 START TEST nvmf_multitarget 00:12:46.690 ************************************ 00:12:46.690 06:38:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:46.948 * Looking for test storage... 
00:12:46.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:46.948 06:38:51 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:46.948 06:38:51 -- nvmf/common.sh@7 -- # uname -s 00:12:46.948 06:38:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:46.948 06:38:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:46.948 06:38:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:46.948 06:38:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:46.948 06:38:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:46.948 06:38:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:46.948 06:38:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:46.948 06:38:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:46.948 06:38:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:46.948 06:38:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:46.948 06:38:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:46.948 06:38:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:46.948 06:38:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:46.948 06:38:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:46.948 06:38:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:46.948 06:38:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:46.948 06:38:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:46.948 06:38:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.948 06:38:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.948 06:38:51 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.948 06:38:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.948 06:38:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.948 06:38:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.948 06:38:51 -- paths/export.sh@5 -- # export PATH 00:12:46.948 06:38:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.948 06:38:51 -- nvmf/common.sh@47 -- # : 0 00:12:46.948 06:38:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:46.948 06:38:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:46.948 06:38:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:46.948 06:38:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:46.948 06:38:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:46.949 06:38:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:46.949 06:38:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:46.949 06:38:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:46.949 06:38:51 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:46.949 06:38:51 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:46.949 06:38:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:46.949 06:38:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:46.949 06:38:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:46.949 06:38:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:46.949 06:38:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:46.949 06:38:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.949 06:38:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:46.949 06:38:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.949 06:38:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:46.949 06:38:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:46.949 06:38:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:46.949 06:38:51 -- common/autotest_common.sh@10 -- # set +x 00:12:48.849 06:38:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:48.849 06:38:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:48.849 06:38:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:48.849 06:38:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:48.849 06:38:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:48.849 06:38:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:48.849 06:38:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:48.849 06:38:53 -- nvmf/common.sh@295 -- # net_devs=() 00:12:48.849 06:38:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:48.849 06:38:53 -- 
nvmf/common.sh@296 -- # e810=() 00:12:48.849 06:38:53 -- nvmf/common.sh@296 -- # local -ga e810 00:12:48.849 06:38:53 -- nvmf/common.sh@297 -- # x722=() 00:12:48.849 06:38:53 -- nvmf/common.sh@297 -- # local -ga x722 00:12:48.849 06:38:53 -- nvmf/common.sh@298 -- # mlx=() 00:12:48.849 06:38:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:48.849 06:38:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:48.849 06:38:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:48.849 06:38:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:48.849 06:38:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:48.849 06:38:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:48.849 06:38:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:48.849 06:38:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:48.849 06:38:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:48.849 06:38:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:48.849 06:38:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:48.849 06:38:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:48.849 06:38:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:48.849 06:38:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:48.849 06:38:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:48.849 06:38:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:48.849 06:38:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:48.849 06:38:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:48.849 06:38:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:48.849 06:38:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:48.849 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:48.849 06:38:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:48.849 06:38:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:48.849 06:38:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.849 06:38:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.849 06:38:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:48.849 06:38:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:48.849 06:38:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:48.849 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:48.849 06:38:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:48.849 06:38:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:48.849 06:38:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.849 06:38:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.849 06:38:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:48.849 06:38:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:48.849 06:38:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:48.849 06:38:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:48.849 06:38:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:48.849 06:38:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.849 06:38:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:48.849 06:38:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.849 06:38:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:12:48.849 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:48.849 06:38:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.849 06:38:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:48.849 06:38:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.849 06:38:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:48.849 06:38:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.849 06:38:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:48.849 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:48.849 06:38:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.849 06:38:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:48.849 06:38:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:48.849 06:38:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:48.849 06:38:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:48.849 06:38:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:48.849 06:38:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:48.849 06:38:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:48.849 06:38:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:48.849 06:38:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:48.849 06:38:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:48.849 06:38:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:48.849 06:38:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:48.849 06:38:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:48.849 06:38:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:48.849 06:38:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:48.849 06:38:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:48.849 06:38:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:48.849 06:38:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:48.849 06:38:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:48.849 06:38:53 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:48.849 06:38:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:48.849 06:38:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:48.849 06:38:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:48.849 06:38:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:48.849 06:38:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:48.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:48.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:12:48.849 00:12:48.849 --- 10.0.0.2 ping statistics --- 00:12:48.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.849 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:12:48.849 06:38:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:48.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:48.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:12:48.849 00:12:48.849 --- 10.0.0.1 ping statistics --- 00:12:48.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.849 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:12:48.849 06:38:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:48.849 06:38:53 -- nvmf/common.sh@411 -- # return 0 00:12:48.849 06:38:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:48.849 06:38:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:48.849 06:38:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:48.849 06:38:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:48.849 06:38:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:48.849 06:38:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:48.849 06:38:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:48.849 06:38:53 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:48.849 06:38:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:48.849 06:38:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:48.849 06:38:53 -- common/autotest_common.sh@10 -- # set +x 00:12:48.849 06:38:53 -- nvmf/common.sh@470 -- # nvmfpid=4121655 00:12:48.849 06:38:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:48.849 06:38:53 -- nvmf/common.sh@471 -- # waitforlisten 4121655 00:12:48.849 06:38:53 -- common/autotest_common.sh@817 -- # '[' -z 4121655 ']' 00:12:48.849 06:38:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.849 06:38:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:48.849 06:38:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.849 06:38:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:48.850 06:38:53 -- common/autotest_common.sh@10 -- # set +x 00:12:49.108 [2024-04-17 06:38:53.457332] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:12:49.108 [2024-04-17 06:38:53.457405] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:49.108 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.108 [2024-04-17 06:38:53.541239] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:49.108 [2024-04-17 06:38:53.640495] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:49.108 [2024-04-17 06:38:53.640558] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:49.108 [2024-04-17 06:38:53.640585] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:49.108 [2024-04-17 06:38:53.640606] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:49.108 [2024-04-17 06:38:53.640624] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:49.108 [2024-04-17 06:38:53.640948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.108 [2024-04-17 06:38:53.641013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:49.108 [2024-04-17 06:38:53.641078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:49.108 [2024-04-17 06:38:53.641087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.366 06:38:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:49.366 06:38:53 -- common/autotest_common.sh@850 -- # return 0 00:12:49.366 06:38:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:49.366 06:38:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:49.366 06:38:53 -- common/autotest_common.sh@10 -- # set +x 00:12:49.366 06:38:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:49.366 06:38:53 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:49.366 06:38:53 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:49.366 06:38:53 -- target/multitarget.sh@21 -- # jq length 00:12:49.366 06:38:53 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:49.366 06:38:53 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:49.624 "nvmf_tgt_1" 00:12:49.624 06:38:54 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:49.624 "nvmf_tgt_2" 00:12:49.624 06:38:54 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:49.624 06:38:54 -- target/multitarget.sh@28 -- # jq length 00:12:49.881 06:38:54 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:49.881 06:38:54 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:49.881 true 00:12:49.881 06:38:54 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:49.881 true 00:12:49.881 06:38:54 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:49.881 06:38:54 -- target/multitarget.sh@35 -- # jq length 00:12:50.138 06:38:54 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:50.138 06:38:54 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:50.138 06:38:54 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:50.138 06:38:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:50.138 06:38:54 -- nvmf/common.sh@117 -- # sync 00:12:50.138 06:38:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:50.138 06:38:54 -- nvmf/common.sh@120 -- # set +e 00:12:50.138 06:38:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:50.138 06:38:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:50.138 rmmod nvme_tcp 00:12:50.138 rmmod nvme_fabrics 00:12:50.138 rmmod nvme_keyring 00:12:50.138 06:38:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:50.138 06:38:54 -- nvmf/common.sh@124 -- # set -e 00:12:50.138 06:38:54 -- nvmf/common.sh@125 -- # return 0 
00:12:50.138 06:38:54 -- nvmf/common.sh@478 -- # '[' -n 4121655 ']' 00:12:50.138 06:38:54 -- nvmf/common.sh@479 -- # killprocess 4121655 00:12:50.138 06:38:54 -- common/autotest_common.sh@936 -- # '[' -z 4121655 ']' 00:12:50.138 06:38:54 -- common/autotest_common.sh@940 -- # kill -0 4121655 00:12:50.138 06:38:54 -- common/autotest_common.sh@941 -- # uname 00:12:50.138 06:38:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:50.138 06:38:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4121655 00:12:50.138 06:38:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:50.138 06:38:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:50.138 06:38:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4121655' 00:12:50.138 killing process with pid 4121655 00:12:50.138 06:38:54 -- common/autotest_common.sh@955 -- # kill 4121655 00:12:50.138 06:38:54 -- common/autotest_common.sh@960 -- # wait 4121655 00:12:50.397 06:38:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:50.397 06:38:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:50.397 06:38:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:50.397 06:38:54 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:50.397 06:38:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:50.397 06:38:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.397 06:38:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:50.397 06:38:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.935 06:38:56 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:52.935 00:12:52.935 real 0m5.658s 00:12:52.935 user 0m6.547s 00:12:52.935 sys 0m1.881s 00:12:52.935 06:38:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:52.935 06:38:56 -- common/autotest_common.sh@10 -- # set +x 00:12:52.935 ************************************ 00:12:52.935 END TEST nvmf_multitarget 00:12:52.935 ************************************ 00:12:52.935 06:38:56 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:52.935 06:38:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:52.935 06:38:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:52.935 06:38:56 -- common/autotest_common.sh@10 -- # set +x 00:12:52.935 ************************************ 00:12:52.935 START TEST nvmf_rpc 00:12:52.935 ************************************ 00:12:52.935 06:38:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:52.935 * Looking for test storage... 
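The nvmf_multitarget run that finished above drives everything through multitarget_rpc.py rather than the usual rpc.py: it counts the targets, creates two more, re-counts, deletes them, and counts again. Condensed from the multitarget.sh trace lines (the RPC variable is only shorthand here; the names nvmf_tgt_1/nvmf_tgt_2, the -s 32 argument, and the expected counts 1/3/1 are taken from the trace):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

$RPC nvmf_get_targets | jq length           # 1: only the default target exists
$RPC nvmf_create_target -n nvmf_tgt_1 -s 32
$RPC nvmf_create_target -n nvmf_tgt_2 -s 32
$RPC nvmf_get_targets | jq length           # 3
$RPC nvmf_delete_target -n nvmf_tgt_1
$RPC nvmf_delete_target -n nvmf_tgt_2
$RPC nvmf_get_targets | jq length           # back to 1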
00:12:52.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.935 06:38:57 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:52.935 06:38:57 -- nvmf/common.sh@7 -- # uname -s 00:12:52.935 06:38:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.935 06:38:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.935 06:38:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.935 06:38:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.936 06:38:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.936 06:38:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.936 06:38:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.936 06:38:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.936 06:38:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.936 06:38:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.936 06:38:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:52.936 06:38:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:52.936 06:38:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.936 06:38:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.936 06:38:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:52.936 06:38:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:52.936 06:38:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:52.936 06:38:57 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.936 06:38:57 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.936 06:38:57 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.936 06:38:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.936 06:38:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.936 06:38:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.936 06:38:57 -- paths/export.sh@5 -- # export PATH 00:12:52.936 06:38:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.936 06:38:57 -- nvmf/common.sh@47 -- # : 0 00:12:52.936 06:38:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:52.936 06:38:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:52.936 06:38:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:52.936 06:38:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.936 06:38:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.936 06:38:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:52.936 06:38:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:52.936 06:38:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:52.936 06:38:57 -- target/rpc.sh@11 -- # loops=5 00:12:52.936 06:38:57 -- target/rpc.sh@23 -- # nvmftestinit 00:12:52.936 06:38:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:52.936 06:38:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:52.936 06:38:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:52.936 06:38:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:52.936 06:38:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:52.936 06:38:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.936 06:38:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.936 06:38:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.936 06:38:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:52.936 06:38:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:52.936 06:38:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:52.936 06:38:57 -- common/autotest_common.sh@10 -- # set +x 00:12:54.842 06:38:59 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:54.842 06:38:59 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:54.842 06:38:59 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:54.842 06:38:59 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:54.842 06:38:59 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:54.842 06:38:59 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:54.842 06:38:59 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:54.842 06:38:59 -- nvmf/common.sh@295 -- # net_devs=() 00:12:54.842 06:38:59 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:54.842 06:38:59 -- nvmf/common.sh@296 -- # e810=() 00:12:54.842 06:38:59 -- nvmf/common.sh@296 -- # local -ga e810 00:12:54.842 
06:38:59 -- nvmf/common.sh@297 -- # x722=() 00:12:54.842 06:38:59 -- nvmf/common.sh@297 -- # local -ga x722 00:12:54.842 06:38:59 -- nvmf/common.sh@298 -- # mlx=() 00:12:54.842 06:38:59 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:54.842 06:38:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.842 06:38:59 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.842 06:38:59 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:54.842 06:38:59 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.842 06:38:59 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.842 06:38:59 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.842 06:38:59 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.842 06:38:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.842 06:38:59 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.842 06:38:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.842 06:38:59 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.842 06:38:59 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:54.842 06:38:59 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:54.842 06:38:59 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:54.842 06:38:59 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:54.842 06:38:59 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:54.842 06:38:59 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:54.842 06:38:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:54.842 06:38:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:54.842 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:54.842 06:38:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:54.842 06:38:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:54.842 06:38:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.842 06:38:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.842 06:38:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:54.842 06:38:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:54.842 06:38:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:54.842 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:54.842 06:38:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:54.842 06:38:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:54.842 06:38:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.842 06:38:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.842 06:38:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:54.842 06:38:59 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:54.842 06:38:59 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:54.842 06:38:59 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:54.842 06:38:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:54.842 06:38:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.842 06:38:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:54.842 06:38:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.842 06:38:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:54.842 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:54.842 06:38:59 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:54.842 06:38:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:54.842 06:38:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.842 06:38:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:54.842 06:38:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.842 06:38:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:54.842 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:54.842 06:38:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.842 06:38:59 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:54.842 06:38:59 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:54.842 06:38:59 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:54.842 06:38:59 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:54.842 06:38:59 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:54.842 06:38:59 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.842 06:38:59 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:54.842 06:38:59 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:54.842 06:38:59 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:54.842 06:38:59 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:54.842 06:38:59 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:54.842 06:38:59 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:54.842 06:38:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:54.842 06:38:59 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.842 06:38:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:54.842 06:38:59 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:54.842 06:38:59 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:54.842 06:38:59 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:54.842 06:38:59 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:54.842 06:38:59 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:54.842 06:38:59 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:54.842 06:38:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:54.842 06:38:59 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:54.842 06:38:59 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:54.842 06:38:59 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:54.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:54.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:12:54.842 00:12:54.842 --- 10.0.0.2 ping statistics --- 00:12:54.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.842 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:12:54.843 06:38:59 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:54.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:54.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:12:54.843 00:12:54.843 --- 10.0.0.1 ping statistics --- 00:12:54.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.843 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:12:54.843 06:38:59 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:54.843 06:38:59 -- nvmf/common.sh@411 -- # return 0 00:12:54.843 06:38:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:54.843 06:38:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:54.843 06:38:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:54.843 06:38:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:54.843 06:38:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:54.843 06:38:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:54.843 06:38:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:54.843 06:38:59 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:54.843 06:38:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:54.843 06:38:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:54.843 06:38:59 -- common/autotest_common.sh@10 -- # set +x 00:12:54.843 06:38:59 -- nvmf/common.sh@470 -- # nvmfpid=4123761 00:12:54.843 06:38:59 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:54.843 06:38:59 -- nvmf/common.sh@471 -- # waitforlisten 4123761 00:12:54.843 06:38:59 -- common/autotest_common.sh@817 -- # '[' -z 4123761 ']' 00:12:54.843 06:38:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.843 06:38:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:54.843 06:38:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.843 06:38:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:54.843 06:38:59 -- common/autotest_common.sh@10 -- # set +x 00:12:54.843 [2024-04-17 06:38:59.363857] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:12:54.843 [2024-04-17 06:38:59.363945] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.843 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.843 [2024-04-17 06:38:59.436333] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:55.101 [2024-04-17 06:38:59.532390] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:55.101 [2024-04-17 06:38:59.532443] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:55.101 [2024-04-17 06:38:59.532461] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:55.101 [2024-04-17 06:38:59.532486] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:55.101 [2024-04-17 06:38:59.532498] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
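Everything the rest of this test relies on is set up in the block above: one ice port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2, its peer (cvl_0_1) stays in the root namespace as 10.0.0.1, TCP port 4420 is opened in iptables, a ping in each direction proves reachability, and nvmf_tgt is then launched inside the namespace. A minimal sketch of the same topology, assuming the interface names from this log:

    NS=cvl_0_0_ns_spdk
    TARGET_IF=cvl_0_0        # port handed to the SPDK target
    INITIATOR_IF=cvl_0_1     # port left in the root namespace

    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                      # root namespace -> target namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1  # target namespace -> root namespace

    # start the target inside the namespace, as nvmfappstart does above
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &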
00:12:55.101 [2024-04-17 06:38:59.532554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.101 [2024-04-17 06:38:59.536205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:55.101 [2024-04-17 06:38:59.536241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:55.101 [2024-04-17 06:38:59.536246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.101 06:38:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:55.101 06:38:59 -- common/autotest_common.sh@850 -- # return 0 00:12:55.101 06:38:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:55.101 06:38:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:55.101 06:38:59 -- common/autotest_common.sh@10 -- # set +x 00:12:55.101 06:38:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:55.101 06:38:59 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:55.101 06:38:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:55.101 06:38:59 -- common/autotest_common.sh@10 -- # set +x 00:12:55.101 06:38:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:55.101 06:38:59 -- target/rpc.sh@26 -- # stats='{ 00:12:55.101 "tick_rate": 2700000000, 00:12:55.101 "poll_groups": [ 00:12:55.101 { 00:12:55.101 "name": "nvmf_tgt_poll_group_0", 00:12:55.101 "admin_qpairs": 0, 00:12:55.101 "io_qpairs": 0, 00:12:55.101 "current_admin_qpairs": 0, 00:12:55.101 "current_io_qpairs": 0, 00:12:55.101 "pending_bdev_io": 0, 00:12:55.101 "completed_nvme_io": 0, 00:12:55.101 "transports": [] 00:12:55.101 }, 00:12:55.101 { 00:12:55.101 "name": "nvmf_tgt_poll_group_1", 00:12:55.101 "admin_qpairs": 0, 00:12:55.101 "io_qpairs": 0, 00:12:55.101 "current_admin_qpairs": 0, 00:12:55.101 "current_io_qpairs": 0, 00:12:55.101 "pending_bdev_io": 0, 00:12:55.101 "completed_nvme_io": 0, 00:12:55.101 "transports": [] 00:12:55.102 }, 00:12:55.102 { 00:12:55.102 "name": "nvmf_tgt_poll_group_2", 00:12:55.102 "admin_qpairs": 0, 00:12:55.102 "io_qpairs": 0, 00:12:55.102 "current_admin_qpairs": 0, 00:12:55.102 "current_io_qpairs": 0, 00:12:55.102 "pending_bdev_io": 0, 00:12:55.102 "completed_nvme_io": 0, 00:12:55.102 "transports": [] 00:12:55.102 }, 00:12:55.102 { 00:12:55.102 "name": "nvmf_tgt_poll_group_3", 00:12:55.102 "admin_qpairs": 0, 00:12:55.102 "io_qpairs": 0, 00:12:55.102 "current_admin_qpairs": 0, 00:12:55.102 "current_io_qpairs": 0, 00:12:55.102 "pending_bdev_io": 0, 00:12:55.102 "completed_nvme_io": 0, 00:12:55.102 "transports": [] 00:12:55.102 } 00:12:55.102 ] 00:12:55.102 }' 00:12:55.102 06:38:59 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:55.102 06:38:59 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:55.360 06:38:59 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:55.360 06:38:59 -- target/rpc.sh@15 -- # wc -l 00:12:55.360 06:38:59 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:55.360 06:38:59 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:55.360 06:38:59 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:55.360 06:38:59 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:55.360 06:38:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:55.360 06:38:59 -- common/autotest_common.sh@10 -- # set +x 00:12:55.360 [2024-04-17 06:38:59.778245] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:55.360 06:38:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:55.360 06:38:59 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:55.360 06:38:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:55.360 06:38:59 -- common/autotest_common.sh@10 -- # set +x 00:12:55.360 06:38:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:55.360 06:38:59 -- target/rpc.sh@33 -- # stats='{ 00:12:55.360 "tick_rate": 2700000000, 00:12:55.360 "poll_groups": [ 00:12:55.360 { 00:12:55.360 "name": "nvmf_tgt_poll_group_0", 00:12:55.360 "admin_qpairs": 0, 00:12:55.360 "io_qpairs": 0, 00:12:55.360 "current_admin_qpairs": 0, 00:12:55.360 "current_io_qpairs": 0, 00:12:55.360 "pending_bdev_io": 0, 00:12:55.360 "completed_nvme_io": 0, 00:12:55.360 "transports": [ 00:12:55.360 { 00:12:55.360 "trtype": "TCP" 00:12:55.360 } 00:12:55.360 ] 00:12:55.360 }, 00:12:55.360 { 00:12:55.360 "name": "nvmf_tgt_poll_group_1", 00:12:55.360 "admin_qpairs": 0, 00:12:55.360 "io_qpairs": 0, 00:12:55.360 "current_admin_qpairs": 0, 00:12:55.360 "current_io_qpairs": 0, 00:12:55.360 "pending_bdev_io": 0, 00:12:55.360 "completed_nvme_io": 0, 00:12:55.360 "transports": [ 00:12:55.360 { 00:12:55.360 "trtype": "TCP" 00:12:55.360 } 00:12:55.360 ] 00:12:55.360 }, 00:12:55.360 { 00:12:55.360 "name": "nvmf_tgt_poll_group_2", 00:12:55.360 "admin_qpairs": 0, 00:12:55.360 "io_qpairs": 0, 00:12:55.360 "current_admin_qpairs": 0, 00:12:55.360 "current_io_qpairs": 0, 00:12:55.360 "pending_bdev_io": 0, 00:12:55.360 "completed_nvme_io": 0, 00:12:55.360 "transports": [ 00:12:55.360 { 00:12:55.360 "trtype": "TCP" 00:12:55.360 } 00:12:55.360 ] 00:12:55.360 }, 00:12:55.360 { 00:12:55.360 "name": "nvmf_tgt_poll_group_3", 00:12:55.360 "admin_qpairs": 0, 00:12:55.360 "io_qpairs": 0, 00:12:55.360 "current_admin_qpairs": 0, 00:12:55.360 "current_io_qpairs": 0, 00:12:55.360 "pending_bdev_io": 0, 00:12:55.360 "completed_nvme_io": 0, 00:12:55.360 "transports": [ 00:12:55.360 { 00:12:55.360 "trtype": "TCP" 00:12:55.360 } 00:12:55.360 ] 00:12:55.360 } 00:12:55.360 ] 00:12:55.360 }' 00:12:55.360 06:38:59 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:55.360 06:38:59 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:55.360 06:38:59 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:55.360 06:38:59 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:55.360 06:38:59 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:55.360 06:38:59 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:55.360 06:38:59 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:55.360 06:38:59 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:55.360 06:38:59 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:55.360 06:38:59 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:55.360 06:38:59 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:55.360 06:38:59 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:55.360 06:38:59 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:55.360 06:38:59 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:55.360 06:38:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:55.360 06:38:59 -- common/autotest_common.sh@10 -- # set +x 00:12:55.360 Malloc1 00:12:55.360 06:38:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:55.360 06:38:59 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:55.360 06:38:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:55.360 06:38:59 -- common/autotest_common.sh@10 -- # set +x 00:12:55.360 
06:38:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:55.360 06:38:59 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:55.360 06:38:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:55.360 06:38:59 -- common/autotest_common.sh@10 -- # set +x 00:12:55.360 06:38:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:55.360 06:38:59 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:55.361 06:38:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:55.361 06:38:59 -- common/autotest_common.sh@10 -- # set +x 00:12:55.361 06:38:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:55.361 06:38:59 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.361 06:38:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:55.361 06:38:59 -- common/autotest_common.sh@10 -- # set +x 00:12:55.361 [2024-04-17 06:38:59.927542] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.361 06:38:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:55.361 06:38:59 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:55.361 06:38:59 -- common/autotest_common.sh@638 -- # local es=0 00:12:55.361 06:38:59 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:55.361 06:38:59 -- common/autotest_common.sh@626 -- # local arg=nvme 00:12:55.361 06:38:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:55.361 06:38:59 -- common/autotest_common.sh@630 -- # type -t nvme 00:12:55.361 06:38:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:55.361 06:38:59 -- common/autotest_common.sh@632 -- # type -P nvme 00:12:55.361 06:38:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:55.361 06:38:59 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:12:55.361 06:38:59 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:12:55.361 06:38:59 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:55.361 [2024-04-17 06:38:59.950019] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:55.361 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:55.361 could not add new controller: failed to write to nvme-fabrics device 00:12:55.361 06:38:59 -- common/autotest_common.sh@641 -- # es=1 00:12:55.361 06:38:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:55.361 06:38:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:55.361 06:38:59 -- common/autotest_common.sh@665 -- # 
(( !es == 0 )) 00:12:55.361 06:38:59 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:55.361 06:38:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:55.361 06:38:59 -- common/autotest_common.sh@10 -- # set +x 00:12:55.361 06:38:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:55.361 06:38:59 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.926 06:39:00 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:55.926 06:39:00 -- common/autotest_common.sh@1184 -- # local i=0 00:12:55.926 06:39:00 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:55.926 06:39:00 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:55.926 06:39:00 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:58.452 06:39:02 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:58.452 06:39:02 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:58.452 06:39:02 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:58.452 06:39:02 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:58.452 06:39:02 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:58.452 06:39:02 -- common/autotest_common.sh@1194 -- # return 0 00:12:58.452 06:39:02 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.452 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.452 06:39:02 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:58.452 06:39:02 -- common/autotest_common.sh@1205 -- # local i=0 00:12:58.452 06:39:02 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:58.452 06:39:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.452 06:39:02 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:58.452 06:39:02 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.452 06:39:02 -- common/autotest_common.sh@1217 -- # return 0 00:12:58.452 06:39:02 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:58.452 06:39:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:58.452 06:39:02 -- common/autotest_common.sh@10 -- # set +x 00:12:58.452 06:39:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:58.452 06:39:02 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.452 06:39:02 -- common/autotest_common.sh@638 -- # local es=0 00:12:58.452 06:39:02 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.452 06:39:02 -- common/autotest_common.sh@626 -- # local arg=nvme 00:12:58.452 06:39:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:58.452 06:39:02 -- common/autotest_common.sh@630 -- # type -t nvme 00:12:58.452 06:39:02 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:58.452 06:39:02 -- common/autotest_common.sh@632 -- # type -P nvme 00:12:58.452 06:39:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:58.452 06:39:02 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:12:58.452 06:39:02 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:12:58.452 06:39:02 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.452 [2024-04-17 06:39:02.614504] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:58.452 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:58.452 could not add new controller: failed to write to nvme-fabrics device 00:12:58.452 06:39:02 -- common/autotest_common.sh@641 -- # es=1 00:12:58.452 06:39:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:58.452 06:39:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:58.452 06:39:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:58.452 06:39:02 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:58.452 06:39:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:58.452 06:39:02 -- common/autotest_common.sh@10 -- # set +x 00:12:58.453 06:39:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:58.453 06:39:02 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.711 06:39:03 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:58.711 06:39:03 -- common/autotest_common.sh@1184 -- # local i=0 00:12:58.711 06:39:03 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:58.711 06:39:03 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:58.711 06:39:03 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:01.274 06:39:05 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:01.274 06:39:05 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:01.274 06:39:05 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.274 06:39:05 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:01.274 06:39:05 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.274 06:39:05 -- common/autotest_common.sh@1194 -- # return 0 00:13:01.274 06:39:05 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.274 06:39:05 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:01.274 06:39:05 -- common/autotest_common.sh@1205 -- # local i=0 00:13:01.274 06:39:05 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:01.274 06:39:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.274 06:39:05 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:01.274 06:39:05 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.274 06:39:05 -- common/autotest_common.sh@1217 -- # return 0 00:13:01.274 06:39:05 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.274 06:39:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.274 06:39:05 -- common/autotest_common.sh@10 -- # set +x 00:13:01.274 06:39:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:01.274 06:39:05 -- target/rpc.sh@81 -- # seq 1 5 00:13:01.274 06:39:05 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:01.274 06:39:05 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:01.274 06:39:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.274 06:39:05 -- common/autotest_common.sh@10 -- # set +x 00:13:01.274 06:39:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:01.274 06:39:05 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.274 06:39:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.274 06:39:05 -- common/autotest_common.sh@10 -- # set +x 00:13:01.274 [2024-04-17 06:39:05.415274] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.274 06:39:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:01.274 06:39:05 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:01.274 06:39:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.274 06:39:05 -- common/autotest_common.sh@10 -- # set +x 00:13:01.274 06:39:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:01.274 06:39:05 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:01.274 06:39:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.274 06:39:05 -- common/autotest_common.sh@10 -- # set +x 00:13:01.274 06:39:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:01.274 06:39:05 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:01.532 06:39:05 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:01.532 06:39:05 -- common/autotest_common.sh@1184 -- # local i=0 00:13:01.532 06:39:05 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:01.532 06:39:05 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:01.532 06:39:05 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:03.431 06:39:07 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:03.431 06:39:07 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:03.431 06:39:07 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.431 06:39:08 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:03.431 06:39:08 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.431 06:39:08 -- common/autotest_common.sh@1194 -- # return 0 00:13:03.431 06:39:08 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:03.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.689 06:39:08 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:03.689 06:39:08 -- common/autotest_common.sh@1205 -- # local i=0 00:13:03.689 06:39:08 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:03.689 06:39:08 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 
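The rejected connects earlier ("does not allow host ... could not add new controller") are the behavior under test, not a failure of the run: a subsystem created without allow-any-host only accepts host NQNs registered with nvmf_subsystem_add_host, or any host once allow_any_host is enabled. The same check as a stand-alone sketch, assuming scripts/rpc.py talks to the target's default RPC socket and HOSTNQN carries the value produced by nvme gen-hostnqn:

    SUBNQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=$(nvme gen-hostnqn)

    # no host registered yet: the target rejects this connect
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$SUBNQN" --hostnqn="$HOSTNQN" \
        || echo "rejected, as expected"

    # register the host NQN and the identical connect succeeds
    scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$SUBNQN" --hostnqn="$HOSTNQN"
    nvme disconnect -n "$SUBNQN"

    # or drop the per-host entry and open the subsystem to every host
    scripts/rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
    scripts/rpc.py nvmf_subsystem_allow_any_host -e "$SUBNQN"

The log additionally passes --hostid alongside --hostnqn on each connect; it is left out of the sketch for brevity.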
00:13:03.689 06:39:08 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:03.689 06:39:08 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.689 06:39:08 -- common/autotest_common.sh@1217 -- # return 0 00:13:03.689 06:39:08 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:03.689 06:39:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.689 06:39:08 -- common/autotest_common.sh@10 -- # set +x 00:13:03.689 06:39:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.689 06:39:08 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:03.689 06:39:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.689 06:39:08 -- common/autotest_common.sh@10 -- # set +x 00:13:03.689 06:39:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.689 06:39:08 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:03.689 06:39:08 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:03.689 06:39:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.689 06:39:08 -- common/autotest_common.sh@10 -- # set +x 00:13:03.689 06:39:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.689 06:39:08 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.689 06:39:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.689 06:39:08 -- common/autotest_common.sh@10 -- # set +x 00:13:03.689 [2024-04-17 06:39:08.179653] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.689 06:39:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.689 06:39:08 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:03.689 06:39:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.689 06:39:08 -- common/autotest_common.sh@10 -- # set +x 00:13:03.689 06:39:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.689 06:39:08 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:03.689 06:39:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.689 06:39:08 -- common/autotest_common.sh@10 -- # set +x 00:13:03.689 06:39:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.689 06:39:08 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:04.255 06:39:08 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:04.255 06:39:08 -- common/autotest_common.sh@1184 -- # local i=0 00:13:04.255 06:39:08 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:04.255 06:39:08 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:04.255 06:39:08 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:06.782 06:39:10 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:06.782 06:39:10 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:06.782 06:39:10 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:06.782 06:39:10 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:06.782 06:39:10 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:06.782 06:39:10 -- 
common/autotest_common.sh@1194 -- # return 0 00:13:06.782 06:39:10 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:06.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.782 06:39:10 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:06.782 06:39:10 -- common/autotest_common.sh@1205 -- # local i=0 00:13:06.782 06:39:10 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:06.782 06:39:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.782 06:39:11 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:06.782 06:39:11 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.782 06:39:11 -- common/autotest_common.sh@1217 -- # return 0 00:13:06.782 06:39:11 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:06.782 06:39:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.782 06:39:11 -- common/autotest_common.sh@10 -- # set +x 00:13:06.782 06:39:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.782 06:39:11 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:06.782 06:39:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.782 06:39:11 -- common/autotest_common.sh@10 -- # set +x 00:13:06.782 06:39:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.782 06:39:11 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:06.782 06:39:11 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:06.782 06:39:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.782 06:39:11 -- common/autotest_common.sh@10 -- # set +x 00:13:06.782 06:39:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.782 06:39:11 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.782 06:39:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.782 06:39:11 -- common/autotest_common.sh@10 -- # set +x 00:13:06.782 [2024-04-17 06:39:11.038756] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.782 06:39:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.782 06:39:11 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:06.782 06:39:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.782 06:39:11 -- common/autotest_common.sh@10 -- # set +x 00:13:06.782 06:39:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.782 06:39:11 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:06.782 06:39:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.782 06:39:11 -- common/autotest_common.sh@10 -- # set +x 00:13:06.782 06:39:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.782 06:39:11 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:07.346 06:39:11 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:07.346 06:39:11 -- common/autotest_common.sh@1184 -- # local i=0 00:13:07.346 06:39:11 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:07.346 06:39:11 -- common/autotest_common.sh@1186 -- 
# [[ -n '' ]] 00:13:07.346 06:39:11 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:09.244 06:39:13 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:09.244 06:39:13 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:09.244 06:39:13 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:09.244 06:39:13 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:09.244 06:39:13 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:09.244 06:39:13 -- common/autotest_common.sh@1194 -- # return 0 00:13:09.244 06:39:13 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:09.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.244 06:39:13 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:09.244 06:39:13 -- common/autotest_common.sh@1205 -- # local i=0 00:13:09.244 06:39:13 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:09.244 06:39:13 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.244 06:39:13 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:09.244 06:39:13 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.244 06:39:13 -- common/autotest_common.sh@1217 -- # return 0 00:13:09.244 06:39:13 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:09.244 06:39:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:09.244 06:39:13 -- common/autotest_common.sh@10 -- # set +x 00:13:09.244 06:39:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:09.244 06:39:13 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:09.244 06:39:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:09.244 06:39:13 -- common/autotest_common.sh@10 -- # set +x 00:13:09.244 06:39:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:09.244 06:39:13 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:09.244 06:39:13 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:09.244 06:39:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:09.244 06:39:13 -- common/autotest_common.sh@10 -- # set +x 00:13:09.244 06:39:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:09.244 06:39:13 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:09.244 06:39:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:09.244 06:39:13 -- common/autotest_common.sh@10 -- # set +x 00:13:09.244 [2024-04-17 06:39:13.810968] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.244 06:39:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:09.244 06:39:13 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:09.244 06:39:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:09.244 06:39:13 -- common/autotest_common.sh@10 -- # set +x 00:13:09.244 06:39:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:09.244 06:39:13 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:09.244 06:39:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:09.244 06:39:13 -- common/autotest_common.sh@10 -- # set +x 00:13:09.244 06:39:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:09.244 
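Each of the five passes in this stretch of the log runs the same lifecycle: create the subsystem with a fixed serial, expose it on 10.0.0.2:4420, attach Malloc1 as namespace 5, open it to any host, connect from the initiator, wait for the serial to show up in lsblk, then disconnect and tear the subsystem back down. One pass written out as a plain script (a sketch; rpc.py stands in for the harness's rpc_cmd wrapper, and Malloc1 is the bdev created earlier with bdev_malloc_create 64 512 -b Malloc1):

    SUBNQN=nqn.2016-06.io.spdk:cnode1
    SERIAL=SPDKISFASTANDAWESOME

    scripts/rpc.py nvmf_create_subsystem "$SUBNQN" -s "$SERIAL"
    scripts/rpc.py nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns "$SUBNQN" Malloc1 -n 5
    scripts/rpc.py nvmf_subsystem_allow_any_host "$SUBNQN"

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$SUBNQN"
    # waitforserial: poll until a block device with our serial appears
    until lsblk -l -o NAME,SERIAL | grep -q -w "$SERIAL"; do sleep 1; done

    nvme disconnect -n "$SUBNQN"
    scripts/rpc.py nvmf_subsystem_remove_ns "$SUBNQN" 5
    scripts/rpc.py nvmf_delete_subsystem "$SUBNQN"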
06:39:13 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:10.177 06:39:14 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:10.177 06:39:14 -- common/autotest_common.sh@1184 -- # local i=0 00:13:10.177 06:39:14 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:10.177 06:39:14 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:10.177 06:39:14 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:12.074 06:39:16 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:12.074 06:39:16 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:12.074 06:39:16 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:12.074 06:39:16 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:12.074 06:39:16 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:12.074 06:39:16 -- common/autotest_common.sh@1194 -- # return 0 00:13:12.074 06:39:16 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:12.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.074 06:39:16 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:12.074 06:39:16 -- common/autotest_common.sh@1205 -- # local i=0 00:13:12.074 06:39:16 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:12.074 06:39:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.074 06:39:16 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:12.074 06:39:16 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.074 06:39:16 -- common/autotest_common.sh@1217 -- # return 0 00:13:12.074 06:39:16 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:12.074 06:39:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:12.074 06:39:16 -- common/autotest_common.sh@10 -- # set +x 00:13:12.074 06:39:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:12.074 06:39:16 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.074 06:39:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:12.074 06:39:16 -- common/autotest_common.sh@10 -- # set +x 00:13:12.074 06:39:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:12.074 06:39:16 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:12.074 06:39:16 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:12.074 06:39:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:12.074 06:39:16 -- common/autotest_common.sh@10 -- # set +x 00:13:12.074 06:39:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:12.074 06:39:16 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.074 06:39:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:12.074 06:39:16 -- common/autotest_common.sh@10 -- # set +x 00:13:12.074 [2024-04-17 06:39:16.582407] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.074 06:39:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:12.074 06:39:16 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:12.074 
06:39:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:12.074 06:39:16 -- common/autotest_common.sh@10 -- # set +x 00:13:12.074 06:39:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:12.074 06:39:16 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:12.074 06:39:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:12.074 06:39:16 -- common/autotest_common.sh@10 -- # set +x 00:13:12.074 06:39:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:12.074 06:39:16 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:12.638 06:39:17 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:12.638 06:39:17 -- common/autotest_common.sh@1184 -- # local i=0 00:13:12.638 06:39:17 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:12.638 06:39:17 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:12.638 06:39:17 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:15.162 06:39:19 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:15.162 06:39:19 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:15.162 06:39:19 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:15.162 06:39:19 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:15.162 06:39:19 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:15.162 06:39:19 -- common/autotest_common.sh@1194 -- # return 0 00:13:15.162 06:39:19 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:15.162 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.162 06:39:19 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:15.162 06:39:19 -- common/autotest_common.sh@1205 -- # local i=0 00:13:15.162 06:39:19 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:15.162 06:39:19 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:15.162 06:39:19 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:15.162 06:39:19 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:15.162 06:39:19 -- common/autotest_common.sh@1217 -- # return 0 00:13:15.162 06:39:19 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:15.162 06:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.162 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.162 06:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.162 06:39:19 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.162 06:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.162 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.162 06:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.162 06:39:19 -- target/rpc.sh@99 -- # seq 1 5 00:13:15.163 06:39:19 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:15.163 06:39:19 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:15.163 06:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.163 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 06:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.163 06:39:19 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.163 06:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.163 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 [2024-04-17 06:39:19.308493] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.163 06:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.163 06:39:19 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:15.163 06:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.163 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 06:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.163 06:39:19 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:15.163 06:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.163 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 06:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.163 06:39:19 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.163 06:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.163 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 06:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.163 06:39:19 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.163 06:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.163 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 06:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.163 06:39:19 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:15.163 06:39:19 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:15.163 06:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.163 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 06:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.163 06:39:19 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.163 06:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.163 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 [2024-04-17 06:39:19.356550] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.163 06:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.163 06:39:19 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:15.163 06:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.163 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 06:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.163 06:39:19 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:15.163 06:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.163 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 06:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.163 06:39:19 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.163 06:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.163 06:39:19 -- 
common/autotest_common.sh@10 -- # set +x 00:13:15.163 06:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.163 06:39:19 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.163 06:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.163 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 06:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.163 06:39:19 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:15.163 06:39:19 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:15.163 06:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.163 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 06:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.163 06:39:19 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.163 06:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.163 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 [2024-04-17 06:39:19.404675] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.163 06:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.163 06:39:19 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:15.163 06:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.163 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 06:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.163 06:39:19 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:15.163 06:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.163 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 06:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.163 06:39:19 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.163 06:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.163 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 06:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.163 06:39:19 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.163 06:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.163 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 06:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.163 06:39:19 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:15.163 06:39:19 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:15.163 06:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.163 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 06:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.163 06:39:19 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.163 06:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.163 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 [2024-04-17 06:39:19.452838] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.163 06:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.163 
06:39:19 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:15.163 06:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.163 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 06:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.163 06:39:19 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:15.163 06:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.163 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 06:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.163 06:39:19 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.163 06:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.163 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 06:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.163 06:39:19 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.163 06:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.163 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 06:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.163 06:39:19 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:15.163 06:39:19 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:15.163 06:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.163 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 06:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.163 06:39:19 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.163 06:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.163 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 [2024-04-17 06:39:19.501008] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.163 06:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.163 06:39:19 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:15.163 06:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.163 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 06:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.163 06:39:19 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:15.163 06:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.163 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 06:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.163 06:39:19 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.163 06:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.163 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 06:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.163 06:39:19 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.163 06:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.163 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 06:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.163 06:39:19 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
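For reference, a condensed sketch of one iteration of the subsystem lifecycle exercised by the loop traced above (assumption: rpc_cmd forwards its arguments to scripts/rpc.py against the running target; the NQN, address, and bdev name are the ones in this log):

  # create the subsystem, expose it on NVMe/TCP, attach the Malloc1 namespace, then tear it down
  for i in $(seq 1 "$loops"); do
      rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
      rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done

Each call returns 0 in this run, which is what the repeated [[ 0 == 0 ]] checks in the trace assert before the loop finishes and the stats are collected.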
00:13:15.163 06:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.163 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.163 06:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.163 06:39:19 -- target/rpc.sh@110 -- # stats='{ 00:13:15.163 "tick_rate": 2700000000, 00:13:15.163 "poll_groups": [ 00:13:15.163 { 00:13:15.163 "name": "nvmf_tgt_poll_group_0", 00:13:15.163 "admin_qpairs": 2, 00:13:15.163 "io_qpairs": 84, 00:13:15.163 "current_admin_qpairs": 0, 00:13:15.163 "current_io_qpairs": 0, 00:13:15.163 "pending_bdev_io": 0, 00:13:15.163 "completed_nvme_io": 136, 00:13:15.163 "transports": [ 00:13:15.163 { 00:13:15.163 "trtype": "TCP" 00:13:15.163 } 00:13:15.163 ] 00:13:15.163 }, 00:13:15.163 { 00:13:15.163 "name": "nvmf_tgt_poll_group_1", 00:13:15.163 "admin_qpairs": 2, 00:13:15.163 "io_qpairs": 84, 00:13:15.163 "current_admin_qpairs": 0, 00:13:15.163 "current_io_qpairs": 0, 00:13:15.163 "pending_bdev_io": 0, 00:13:15.163 "completed_nvme_io": 102, 00:13:15.163 "transports": [ 00:13:15.163 { 00:13:15.163 "trtype": "TCP" 00:13:15.163 } 00:13:15.163 ] 00:13:15.163 }, 00:13:15.163 { 00:13:15.163 "name": "nvmf_tgt_poll_group_2", 00:13:15.163 "admin_qpairs": 1, 00:13:15.163 "io_qpairs": 84, 00:13:15.163 "current_admin_qpairs": 0, 00:13:15.163 "current_io_qpairs": 0, 00:13:15.163 "pending_bdev_io": 0, 00:13:15.163 "completed_nvme_io": 208, 00:13:15.163 "transports": [ 00:13:15.163 { 00:13:15.163 "trtype": "TCP" 00:13:15.163 } 00:13:15.163 ] 00:13:15.163 }, 00:13:15.163 { 00:13:15.163 "name": "nvmf_tgt_poll_group_3", 00:13:15.163 "admin_qpairs": 2, 00:13:15.163 "io_qpairs": 84, 00:13:15.163 "current_admin_qpairs": 0, 00:13:15.163 "current_io_qpairs": 0, 00:13:15.163 "pending_bdev_io": 0, 00:13:15.163 "completed_nvme_io": 240, 00:13:15.163 "transports": [ 00:13:15.163 { 00:13:15.163 "trtype": "TCP" 00:13:15.163 } 00:13:15.163 ] 00:13:15.163 } 00:13:15.163 ] 00:13:15.163 }' 00:13:15.163 06:39:19 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:15.163 06:39:19 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:15.163 06:39:19 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:15.163 06:39:19 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:15.163 06:39:19 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:15.163 06:39:19 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:15.163 06:39:19 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:15.163 06:39:19 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:15.163 06:39:19 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:15.163 06:39:19 -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:13:15.163 06:39:19 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:15.163 06:39:19 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:15.163 06:39:19 -- target/rpc.sh@123 -- # nvmftestfini 00:13:15.163 06:39:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:15.163 06:39:19 -- nvmf/common.sh@117 -- # sync 00:13:15.163 06:39:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:15.163 06:39:19 -- nvmf/common.sh@120 -- # set +e 00:13:15.163 06:39:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:15.163 06:39:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:15.163 rmmod nvme_tcp 00:13:15.163 rmmod nvme_fabrics 00:13:15.163 rmmod nvme_keyring 00:13:15.163 06:39:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:15.163 06:39:19 -- nvmf/common.sh@124 -- # set -e 00:13:15.163 06:39:19 -- 
nvmf/common.sh@125 -- # return 0 00:13:15.163 06:39:19 -- nvmf/common.sh@478 -- # '[' -n 4123761 ']' 00:13:15.163 06:39:19 -- nvmf/common.sh@479 -- # killprocess 4123761 00:13:15.163 06:39:19 -- common/autotest_common.sh@936 -- # '[' -z 4123761 ']' 00:13:15.163 06:39:19 -- common/autotest_common.sh@940 -- # kill -0 4123761 00:13:15.163 06:39:19 -- common/autotest_common.sh@941 -- # uname 00:13:15.163 06:39:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:15.163 06:39:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4123761 00:13:15.163 06:39:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:15.163 06:39:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:15.163 06:39:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4123761' 00:13:15.163 killing process with pid 4123761 00:13:15.163 06:39:19 -- common/autotest_common.sh@955 -- # kill 4123761 00:13:15.163 06:39:19 -- common/autotest_common.sh@960 -- # wait 4123761 00:13:15.421 06:39:19 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:15.421 06:39:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:15.421 06:39:19 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:15.421 06:39:19 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:15.421 06:39:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:15.421 06:39:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.421 06:39:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:15.421 06:39:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.957 06:39:22 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:17.957 00:13:17.957 real 0m24.930s 00:13:17.957 user 1m20.718s 00:13:17.957 sys 0m3.884s 00:13:17.957 06:39:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:17.957 06:39:22 -- common/autotest_common.sh@10 -- # set +x 00:13:17.957 ************************************ 00:13:17.957 END TEST nvmf_rpc 00:13:17.957 ************************************ 00:13:17.957 06:39:22 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:17.957 06:39:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:17.957 06:39:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:17.957 06:39:22 -- common/autotest_common.sh@10 -- # set +x 00:13:17.957 ************************************ 00:13:17.957 START TEST nvmf_invalid 00:13:17.957 ************************************ 00:13:17.957 06:39:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:17.957 * Looking for test storage... 
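The teardown that just ran (nvmftestfini and killprocess) reduces to a few steps; a minimal sketch, with the pid and interface name taken from this run:

  nvmfpid=4123761                      # pid recorded when nvmf_tgt was started for this test
  sync
  modprobe -v -r nvme-tcp              # also drops nvme_fabrics and nvme_keyring, per the rmmod output above
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"   # killprocess first checks the process name (reactor_0) before killing
  ip -4 addr flush cvl_0_1             # drop the initiator-side test address

With the RPC target gone, the suite moves on to nvmf_invalid, which probes the JSON-RPC parameter validation paths traced below.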
00:13:17.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:17.957 06:39:22 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:17.957 06:39:22 -- nvmf/common.sh@7 -- # uname -s 00:13:17.957 06:39:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:17.957 06:39:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:17.957 06:39:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:17.957 06:39:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:17.957 06:39:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:17.957 06:39:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:17.957 06:39:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.957 06:39:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:17.957 06:39:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.957 06:39:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:17.957 06:39:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:17.957 06:39:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:17.957 06:39:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.957 06:39:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:17.957 06:39:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:17.957 06:39:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:17.957 06:39:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:17.957 06:39:22 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.957 06:39:22 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.957 06:39:22 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.957 06:39:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.957 06:39:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.957 06:39:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.957 06:39:22 -- paths/export.sh@5 -- # export PATH 00:13:17.957 06:39:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.957 06:39:22 -- nvmf/common.sh@47 -- # : 0 00:13:17.957 06:39:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:17.957 06:39:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:17.957 06:39:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:17.957 06:39:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.957 06:39:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.957 06:39:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:17.957 06:39:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:17.957 06:39:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:17.957 06:39:22 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:17.957 06:39:22 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:17.957 06:39:22 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:17.957 06:39:22 -- target/invalid.sh@14 -- # target=foobar 00:13:17.957 06:39:22 -- target/invalid.sh@16 -- # RANDOM=0 00:13:17.957 06:39:22 -- target/invalid.sh@34 -- # nvmftestinit 00:13:17.958 06:39:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:17.958 06:39:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:17.958 06:39:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:17.958 06:39:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:17.958 06:39:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:17.958 06:39:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.958 06:39:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:17.958 06:39:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.958 06:39:22 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:17.958 06:39:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:17.958 06:39:22 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:17.958 06:39:22 -- common/autotest_common.sh@10 -- # set +x 00:13:19.914 06:39:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:19.914 06:39:24 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:19.914 06:39:24 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:19.914 06:39:24 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:19.914 06:39:24 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:19.914 06:39:24 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:19.914 06:39:24 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:19.914 06:39:24 -- nvmf/common.sh@295 -- # net_devs=() 00:13:19.914 06:39:24 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:19.914 06:39:24 -- nvmf/common.sh@296 -- # e810=() 00:13:19.914 06:39:24 -- nvmf/common.sh@296 -- # local -ga e810 00:13:19.914 06:39:24 -- nvmf/common.sh@297 -- # x722=() 00:13:19.914 06:39:24 -- nvmf/common.sh@297 -- # local -ga x722 00:13:19.914 06:39:24 -- nvmf/common.sh@298 -- # mlx=() 00:13:19.914 06:39:24 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:19.914 06:39:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:19.914 06:39:24 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:19.914 06:39:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:19.914 06:39:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:19.914 06:39:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:19.914 06:39:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:19.914 06:39:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:19.914 06:39:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:19.914 06:39:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:19.914 06:39:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:19.914 06:39:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:19.914 06:39:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:19.914 06:39:24 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:19.914 06:39:24 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:19.914 06:39:24 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:19.914 06:39:24 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:19.914 06:39:24 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:19.914 06:39:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:19.914 06:39:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:19.914 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:19.914 06:39:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:19.914 06:39:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:19.914 06:39:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:19.914 06:39:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:19.914 06:39:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:19.914 06:39:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:19.914 06:39:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:19.914 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:19.914 06:39:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:19.914 06:39:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:19.914 06:39:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:19.914 06:39:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:19.914 06:39:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:19.914 06:39:24 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:19.914 06:39:24 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:19.914 06:39:24 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:19.914 06:39:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:19.914 
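The device discovery that follows maps each matching PCI function to its kernel netdev name through sysfs; a standalone sketch of that lookup (the PCI address is the first one reported in this log):

  pci=0000:0a:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per interface exposed by this function
  pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names, e.g. cvl_0_0
  echo "Found net devices under $pci: ${pci_net_devs[*]}"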
06:39:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:19.914 06:39:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:19.914 06:39:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:19.914 06:39:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:19.914 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:19.914 06:39:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:19.914 06:39:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:19.914 06:39:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:19.914 06:39:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:19.914 06:39:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:19.914 06:39:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:19.914 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:19.914 06:39:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:19.914 06:39:24 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:19.914 06:39:24 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:19.914 06:39:24 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:19.914 06:39:24 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:19.914 06:39:24 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:19.914 06:39:24 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:19.914 06:39:24 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:19.914 06:39:24 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:19.914 06:39:24 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:19.914 06:39:24 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:19.914 06:39:24 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:19.914 06:39:24 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:19.914 06:39:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:19.914 06:39:24 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:19.914 06:39:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:19.914 06:39:24 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:19.914 06:39:24 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:19.914 06:39:24 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:19.914 06:39:24 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:19.914 06:39:24 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:19.914 06:39:24 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:19.914 06:39:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:19.914 06:39:24 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:19.914 06:39:24 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:19.914 06:39:24 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:19.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:19.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:13:19.914 00:13:19.914 --- 10.0.0.2 ping statistics --- 00:13:19.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.914 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:13:19.914 06:39:24 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:19.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:19.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:13:19.914 00:13:19.914 --- 10.0.0.1 ping statistics --- 00:13:19.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.914 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:13:19.914 06:39:24 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:19.914 06:39:24 -- nvmf/common.sh@411 -- # return 0 00:13:19.914 06:39:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:19.914 06:39:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:19.914 06:39:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:19.914 06:39:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:19.914 06:39:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:19.914 06:39:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:19.914 06:39:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:19.914 06:39:24 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:19.914 06:39:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:19.914 06:39:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:19.914 06:39:24 -- common/autotest_common.sh@10 -- # set +x 00:13:19.914 06:39:24 -- nvmf/common.sh@470 -- # nvmfpid=4128877 00:13:19.914 06:39:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:19.914 06:39:24 -- nvmf/common.sh@471 -- # waitforlisten 4128877 00:13:19.914 06:39:24 -- common/autotest_common.sh@817 -- # '[' -z 4128877 ']' 00:13:19.914 06:39:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.914 06:39:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:19.914 06:39:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.914 06:39:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:19.914 06:39:24 -- common/autotest_common.sh@10 -- # set +x 00:13:19.914 [2024-04-17 06:39:24.372806] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:13:19.914 [2024-04-17 06:39:24.372909] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.914 EAL: No free 2048 kB hugepages reported on node 1 00:13:19.914 [2024-04-17 06:39:24.443866] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:20.172 [2024-04-17 06:39:24.541366] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.172 [2024-04-17 06:39:24.541425] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.172 [2024-04-17 06:39:24.541442] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:20.172 [2024-04-17 06:39:24.541456] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:20.172 [2024-04-17 06:39:24.541467] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
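Condensed from the nvmf_tcp_init and nvmfappstart traces above, the setup has this shape: one of the two ice ports is moved into a private network namespace for the target, the other stays in the root namespace as the initiator, and nvmf_tgt is launched inside the namespace (interface names, addresses, and flags are the ones from this run):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator-side port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                        # sanity check before starting the target
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The trace then records the target's pid (4128877 here) and waits for the RPC socket /var/tmp/spdk.sock to come up before issuing any commands.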
00:13:20.172 [2024-04-17 06:39:24.541546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.172 [2024-04-17 06:39:24.541601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.172 [2024-04-17 06:39:24.541666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:20.172 [2024-04-17 06:39:24.541669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.172 06:39:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:20.172 06:39:24 -- common/autotest_common.sh@850 -- # return 0 00:13:20.172 06:39:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:20.172 06:39:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:20.172 06:39:24 -- common/autotest_common.sh@10 -- # set +x 00:13:20.172 06:39:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:20.172 06:39:24 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:20.172 06:39:24 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode798 00:13:20.430 [2024-04-17 06:39:24.918774] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:20.430 06:39:24 -- target/invalid.sh@40 -- # out='request: 00:13:20.430 { 00:13:20.430 "nqn": "nqn.2016-06.io.spdk:cnode798", 00:13:20.430 "tgt_name": "foobar", 00:13:20.430 "method": "nvmf_create_subsystem", 00:13:20.430 "req_id": 1 00:13:20.430 } 00:13:20.430 Got JSON-RPC error response 00:13:20.430 response: 00:13:20.430 { 00:13:20.430 "code": -32603, 00:13:20.430 "message": "Unable to find target foobar" 00:13:20.430 }' 00:13:20.430 06:39:24 -- target/invalid.sh@41 -- # [[ request: 00:13:20.430 { 00:13:20.430 "nqn": "nqn.2016-06.io.spdk:cnode798", 00:13:20.430 "tgt_name": "foobar", 00:13:20.430 "method": "nvmf_create_subsystem", 00:13:20.430 "req_id": 1 00:13:20.430 } 00:13:20.430 Got JSON-RPC error response 00:13:20.430 response: 00:13:20.430 { 00:13:20.430 "code": -32603, 00:13:20.430 "message": "Unable to find target foobar" 00:13:20.430 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:20.430 06:39:24 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:20.430 06:39:24 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode8409 00:13:20.687 [2024-04-17 06:39:25.163618] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8409: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:20.687 06:39:25 -- target/invalid.sh@45 -- # out='request: 00:13:20.687 { 00:13:20.687 "nqn": "nqn.2016-06.io.spdk:cnode8409", 00:13:20.687 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:20.687 "method": "nvmf_create_subsystem", 00:13:20.687 "req_id": 1 00:13:20.687 } 00:13:20.687 Got JSON-RPC error response 00:13:20.687 response: 00:13:20.687 { 00:13:20.687 "code": -32602, 00:13:20.687 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:20.687 }' 00:13:20.687 06:39:25 -- target/invalid.sh@46 -- # [[ request: 00:13:20.687 { 00:13:20.687 "nqn": "nqn.2016-06.io.spdk:cnode8409", 00:13:20.687 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:20.687 "method": "nvmf_create_subsystem", 00:13:20.687 "req_id": 1 00:13:20.687 } 00:13:20.687 Got JSON-RPC error response 00:13:20.687 response: 00:13:20.687 { 00:13:20.687 
"code": -32602, 00:13:20.687 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:20.687 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:20.688 06:39:25 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:20.688 06:39:25 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode28546 00:13:20.945 [2024-04-17 06:39:25.404407] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28546: invalid model number 'SPDK_Controller' 00:13:20.945 06:39:25 -- target/invalid.sh@50 -- # out='request: 00:13:20.945 { 00:13:20.945 "nqn": "nqn.2016-06.io.spdk:cnode28546", 00:13:20.945 "model_number": "SPDK_Controller\u001f", 00:13:20.945 "method": "nvmf_create_subsystem", 00:13:20.945 "req_id": 1 00:13:20.945 } 00:13:20.945 Got JSON-RPC error response 00:13:20.945 response: 00:13:20.945 { 00:13:20.945 "code": -32602, 00:13:20.945 "message": "Invalid MN SPDK_Controller\u001f" 00:13:20.945 }' 00:13:20.945 06:39:25 -- target/invalid.sh@51 -- # [[ request: 00:13:20.945 { 00:13:20.945 "nqn": "nqn.2016-06.io.spdk:cnode28546", 00:13:20.945 "model_number": "SPDK_Controller\u001f", 00:13:20.945 "method": "nvmf_create_subsystem", 00:13:20.945 "req_id": 1 00:13:20.945 } 00:13:20.945 Got JSON-RPC error response 00:13:20.946 response: 00:13:20.946 { 00:13:20.946 "code": -32602, 00:13:20.946 "message": "Invalid MN SPDK_Controller\u001f" 00:13:20.946 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:20.946 06:39:25 -- target/invalid.sh@54 -- # gen_random_s 21 00:13:20.946 06:39:25 -- target/invalid.sh@19 -- # local length=21 ll 00:13:20.946 06:39:25 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:20.946 06:39:25 -- target/invalid.sh@21 -- # local chars 00:13:20.946 06:39:25 -- target/invalid.sh@22 -- # local string 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # printf %x 79 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # string+=O 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # printf %x 119 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # string+=w 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # printf %x 78 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # string+=N 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # printf %x 56 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # echo 
-e '\x38' 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # string+=8 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # printf %x 35 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # string+='#' 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # printf %x 111 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # string+=o 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # printf %x 76 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # string+=L 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # printf %x 56 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # string+=8 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # printf %x 97 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # string+=a 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # printf %x 36 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # string+='$' 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # printf %x 61 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # string+== 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # printf %x 51 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # string+=3 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # printf %x 39 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # string+=\' 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # printf %x 120 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # string+=x 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # printf %x 101 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # echo -e 
'\x65' 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # string+=e 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # printf %x 126 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # string+='~' 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # printf %x 122 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # string+=z 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # printf %x 105 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # string+=i 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # printf %x 92 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # string+='\' 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # printf %x 84 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # string+=T 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # printf %x 113 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:20.946 06:39:25 -- target/invalid.sh@25 -- # string+=q 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.946 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.946 06:39:25 -- target/invalid.sh@28 -- # [[ O == \- ]] 00:13:20.946 06:39:25 -- target/invalid.sh@31 -- # echo 'OwN8#oL8a$=3'\''xe~zi\Tq' 00:13:20.946 06:39:25 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'OwN8#oL8a$=3'\''xe~zi\Tq' nqn.2016-06.io.spdk:cnode13605 00:13:21.204 [2024-04-17 06:39:25.713466] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13605: invalid serial number 'OwN8#oL8a$=3'xe~zi\Tq' 00:13:21.204 06:39:25 -- target/invalid.sh@54 -- # out='request: 00:13:21.204 { 00:13:21.204 "nqn": "nqn.2016-06.io.spdk:cnode13605", 00:13:21.204 "serial_number": "OwN8#oL8a$=3'\''xe~zi\\Tq", 00:13:21.204 "method": "nvmf_create_subsystem", 00:13:21.204 "req_id": 1 00:13:21.204 } 00:13:21.204 Got JSON-RPC error response 00:13:21.204 response: 00:13:21.204 { 00:13:21.204 "code": -32602, 00:13:21.204 "message": "Invalid SN OwN8#oL8a$=3'\''xe~zi\\Tq" 00:13:21.204 }' 00:13:21.204 06:39:25 -- target/invalid.sh@55 -- # [[ request: 00:13:21.204 { 00:13:21.204 "nqn": "nqn.2016-06.io.spdk:cnode13605", 00:13:21.204 "serial_number": "OwN8#oL8a$=3'xe~zi\\Tq", 00:13:21.204 "method": "nvmf_create_subsystem", 00:13:21.204 "req_id": 1 00:13:21.204 } 00:13:21.204 Got JSON-RPC error response 00:13:21.204 response: 00:13:21.204 { 00:13:21.204 "code": -32602, 00:13:21.204 "message": "Invalid 
SN OwN8#oL8a$=3'xe~zi\\Tq" 00:13:21.204 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:21.204 06:39:25 -- target/invalid.sh@58 -- # gen_random_s 41 00:13:21.204 06:39:25 -- target/invalid.sh@19 -- # local length=41 ll 00:13:21.204 06:39:25 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:21.204 06:39:25 -- target/invalid.sh@21 -- # local chars 00:13:21.204 06:39:25 -- target/invalid.sh@22 -- # local string 00:13:21.204 06:39:25 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:21.204 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.204 06:39:25 -- target/invalid.sh@25 -- # printf %x 78 00:13:21.204 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:21.204 06:39:25 -- target/invalid.sh@25 -- # string+=N 00:13:21.204 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.204 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.204 06:39:25 -- target/invalid.sh@25 -- # printf %x 72 00:13:21.204 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:21.204 06:39:25 -- target/invalid.sh@25 -- # string+=H 00:13:21.204 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.204 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.204 06:39:25 -- target/invalid.sh@25 -- # printf %x 93 00:13:21.204 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:21.204 06:39:25 -- target/invalid.sh@25 -- # string+=']' 00:13:21.204 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.204 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.204 06:39:25 -- target/invalid.sh@25 -- # printf %x 34 00:13:21.204 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:21.204 06:39:25 -- target/invalid.sh@25 -- # string+='"' 00:13:21.204 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.204 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.204 06:39:25 -- target/invalid.sh@25 -- # printf %x 37 00:13:21.204 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:21.204 06:39:25 -- target/invalid.sh@25 -- # string+=% 00:13:21.204 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.204 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.204 06:39:25 -- target/invalid.sh@25 -- # printf %x 56 00:13:21.204 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:21.204 06:39:25 -- target/invalid.sh@25 -- # string+=8 00:13:21.204 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.204 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.204 06:39:25 -- target/invalid.sh@25 -- # printf %x 46 00:13:21.204 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:21.204 06:39:25 -- target/invalid.sh@25 -- # string+=. 
00:13:21.204 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.204 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.204 06:39:25 -- target/invalid.sh@25 -- # printf %x 116 00:13:21.204 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:21.204 06:39:25 -- target/invalid.sh@25 -- # string+=t 00:13:21.204 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.204 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.204 06:39:25 -- target/invalid.sh@25 -- # printf %x 92 00:13:21.204 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:21.204 06:39:25 -- target/invalid.sh@25 -- # string+='\' 00:13:21.204 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.204 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.204 06:39:25 -- target/invalid.sh@25 -- # printf %x 113 00:13:21.204 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:21.204 06:39:25 -- target/invalid.sh@25 -- # string+=q 00:13:21.204 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.204 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.205 06:39:25 -- target/invalid.sh@25 -- # printf %x 63 00:13:21.205 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:21.205 06:39:25 -- target/invalid.sh@25 -- # string+='?' 00:13:21.205 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.205 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.205 06:39:25 -- target/invalid.sh@25 -- # printf %x 40 00:13:21.205 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:21.205 06:39:25 -- target/invalid.sh@25 -- # string+='(' 00:13:21.205 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.205 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.205 06:39:25 -- target/invalid.sh@25 -- # printf %x 111 00:13:21.205 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:21.205 06:39:25 -- target/invalid.sh@25 -- # string+=o 00:13:21.205 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.205 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.205 06:39:25 -- target/invalid.sh@25 -- # printf %x 69 00:13:21.205 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:21.205 06:39:25 -- target/invalid.sh@25 -- # string+=E 00:13:21.205 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.205 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.205 06:39:25 -- target/invalid.sh@25 -- # printf %x 90 00:13:21.205 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:21.205 06:39:25 -- target/invalid.sh@25 -- # string+=Z 00:13:21.205 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.205 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.205 06:39:25 -- target/invalid.sh@25 -- # printf %x 32 00:13:21.205 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:21.205 06:39:25 -- target/invalid.sh@25 -- # string+=' ' 00:13:21.205 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.205 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.205 06:39:25 -- target/invalid.sh@25 -- # printf %x 69 00:13:21.205 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:21.205 06:39:25 -- target/invalid.sh@25 -- # string+=E 00:13:21.205 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.205 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.205 06:39:25 -- target/invalid.sh@25 -- # printf %x 111 00:13:21.205 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:21.205 06:39:25 -- target/invalid.sh@25 -- # string+=o 
00:13:21.205 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.205 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.205 06:39:25 -- target/invalid.sh@25 -- # printf %x 106 00:13:21.205 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:21.205 06:39:25 -- target/invalid.sh@25 -- # string+=j 00:13:21.205 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.205 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.205 06:39:25 -- target/invalid.sh@25 -- # printf %x 106 00:13:21.205 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:21.205 06:39:25 -- target/invalid.sh@25 -- # string+=j 00:13:21.205 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.205 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.205 06:39:25 -- target/invalid.sh@25 -- # printf %x 116 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # string+=t 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # printf %x 111 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # string+=o 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # printf %x 78 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # string+=N 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # printf %x 80 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # string+=P 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # printf %x 53 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # string+=5 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # printf %x 32 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # string+=' ' 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # printf %x 115 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # string+=s 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # printf %x 97 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # string+=a 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # printf %x 90 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # string+=Z 
00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # printf %x 104 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # string+=h 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # printf %x 40 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # string+='(' 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # printf %x 52 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # string+=4 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # printf %x 112 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # string+=p 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # printf %x 40 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # string+='(' 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # printf %x 116 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # string+=t 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # printf %x 69 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # string+=E 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # printf %x 74 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # string+=J 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # printf %x 85 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # string+=U 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # printf %x 95 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # string+=_ 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # printf %x 91 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # string+='[' 
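The long printf/echo/string+= trace running through this part of the log is gen_random_s building a 41-character model number one character at a time. A compact reconstruction of what that helper does (a sketch only; the real definition lives in test/nvmf/target/invalid.sh, and the trace sets RANDOM=0 earlier so the output is deterministic):

  gen_random_s() {
      local length=$1 ll hex string=
      local chars=({32..127})                        # printable ASCII codes, matching the chars array in the trace
      for (( ll = 0; ll < length; ll++ )); do
          printf -v hex '%x' "${chars[RANDOM % ${#chars[@]}]}"
          string+=$(echo -e "\x$hex")                # append the character for that code, as in the string+= lines
      done
      echo "$string"
  }

  gen_random_s 41    # e.g. the invalid model number handed to nvmf_create_subsystem -d just below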
00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # printf %x 67 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:21.463 06:39:25 -- target/invalid.sh@25 -- # string+=C 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.463 06:39:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.463 06:39:25 -- target/invalid.sh@28 -- # [[ N == \- ]] 00:13:21.463 06:39:25 -- target/invalid.sh@31 -- # echo 'NH]"%8.t\q?(oEZ EojjtoNP5 saZh(4p(tEJU_[C' 00:13:21.463 06:39:25 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'NH]"%8.t\q?(oEZ EojjtoNP5 saZh(4p(tEJU_[C' nqn.2016-06.io.spdk:cnode17757 00:13:21.721 [2024-04-17 06:39:26.090624] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17757: invalid model number 'NH]"%8.t\q?(oEZ EojjtoNP5 saZh(4p(tEJU_[C' 00:13:21.721 06:39:26 -- target/invalid.sh@58 -- # out='request: 00:13:21.721 { 00:13:21.721 "nqn": "nqn.2016-06.io.spdk:cnode17757", 00:13:21.721 "model_number": "NH]\"%8.t\\q?(oEZ EojjtoNP5 saZh(4p(tEJU_[C", 00:13:21.721 "method": "nvmf_create_subsystem", 00:13:21.721 "req_id": 1 00:13:21.721 } 00:13:21.721 Got JSON-RPC error response 00:13:21.721 response: 00:13:21.721 { 00:13:21.721 "code": -32602, 00:13:21.721 "message": "Invalid MN NH]\"%8.t\\q?(oEZ EojjtoNP5 saZh(4p(tEJU_[C" 00:13:21.721 }' 00:13:21.721 06:39:26 -- target/invalid.sh@59 -- # [[ request: 00:13:21.721 { 00:13:21.721 "nqn": "nqn.2016-06.io.spdk:cnode17757", 00:13:21.721 "model_number": "NH]\"%8.t\\q?(oEZ EojjtoNP5 saZh(4p(tEJU_[C", 00:13:21.721 "method": "nvmf_create_subsystem", 00:13:21.721 "req_id": 1 00:13:21.721 } 00:13:21.721 Got JSON-RPC error response 00:13:21.721 response: 00:13:21.721 { 00:13:21.721 "code": -32602, 00:13:21.721 "message": "Invalid MN NH]\"%8.t\\q?(oEZ EojjtoNP5 saZh(4p(tEJU_[C" 00:13:21.721 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:21.721 06:39:26 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:21.979 [2024-04-17 06:39:26.331523] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:21.979 06:39:26 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:22.237 06:39:26 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:22.237 06:39:26 -- target/invalid.sh@67 -- # echo '' 00:13:22.237 06:39:26 -- target/invalid.sh@67 -- # head -n 1 00:13:22.237 06:39:26 -- target/invalid.sh@67 -- # IP= 00:13:22.237 06:39:26 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:22.237 [2024-04-17 06:39:26.833132] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:22.496 06:39:26 -- target/invalid.sh@69 -- # out='request: 00:13:22.496 { 00:13:22.496 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:22.496 "listen_address": { 00:13:22.496 "trtype": "tcp", 00:13:22.496 "traddr": "", 00:13:22.496 "trsvcid": "4421" 00:13:22.496 }, 00:13:22.496 "method": "nvmf_subsystem_remove_listener", 00:13:22.496 "req_id": 1 00:13:22.496 } 00:13:22.496 Got JSON-RPC error response 00:13:22.496 response: 00:13:22.496 { 00:13:22.496 "code": -32602, 00:13:22.496 
"message": "Invalid parameters" 00:13:22.496 }' 00:13:22.496 06:39:26 -- target/invalid.sh@70 -- # [[ request: 00:13:22.496 { 00:13:22.496 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:22.496 "listen_address": { 00:13:22.496 "trtype": "tcp", 00:13:22.496 "traddr": "", 00:13:22.496 "trsvcid": "4421" 00:13:22.496 }, 00:13:22.496 "method": "nvmf_subsystem_remove_listener", 00:13:22.496 "req_id": 1 00:13:22.496 } 00:13:22.496 Got JSON-RPC error response 00:13:22.496 response: 00:13:22.496 { 00:13:22.496 "code": -32602, 00:13:22.496 "message": "Invalid parameters" 00:13:22.496 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:22.496 06:39:26 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9832 -i 0 00:13:22.496 [2024-04-17 06:39:27.073894] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9832: invalid cntlid range [0-65519] 00:13:22.496 06:39:27 -- target/invalid.sh@73 -- # out='request: 00:13:22.496 { 00:13:22.496 "nqn": "nqn.2016-06.io.spdk:cnode9832", 00:13:22.496 "min_cntlid": 0, 00:13:22.496 "method": "nvmf_create_subsystem", 00:13:22.496 "req_id": 1 00:13:22.496 } 00:13:22.496 Got JSON-RPC error response 00:13:22.496 response: 00:13:22.496 { 00:13:22.496 "code": -32602, 00:13:22.496 "message": "Invalid cntlid range [0-65519]" 00:13:22.496 }' 00:13:22.496 06:39:27 -- target/invalid.sh@74 -- # [[ request: 00:13:22.496 { 00:13:22.496 "nqn": "nqn.2016-06.io.spdk:cnode9832", 00:13:22.496 "min_cntlid": 0, 00:13:22.496 "method": "nvmf_create_subsystem", 00:13:22.496 "req_id": 1 00:13:22.496 } 00:13:22.496 Got JSON-RPC error response 00:13:22.496 response: 00:13:22.496 { 00:13:22.496 "code": -32602, 00:13:22.496 "message": "Invalid cntlid range [0-65519]" 00:13:22.496 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:22.496 06:39:27 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30069 -i 65520 00:13:22.753 [2024-04-17 06:39:27.314646] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30069: invalid cntlid range [65520-65519] 00:13:22.753 06:39:27 -- target/invalid.sh@75 -- # out='request: 00:13:22.753 { 00:13:22.753 "nqn": "nqn.2016-06.io.spdk:cnode30069", 00:13:22.753 "min_cntlid": 65520, 00:13:22.753 "method": "nvmf_create_subsystem", 00:13:22.753 "req_id": 1 00:13:22.753 } 00:13:22.753 Got JSON-RPC error response 00:13:22.753 response: 00:13:22.753 { 00:13:22.753 "code": -32602, 00:13:22.753 "message": "Invalid cntlid range [65520-65519]" 00:13:22.753 }' 00:13:22.753 06:39:27 -- target/invalid.sh@76 -- # [[ request: 00:13:22.753 { 00:13:22.753 "nqn": "nqn.2016-06.io.spdk:cnode30069", 00:13:22.753 "min_cntlid": 65520, 00:13:22.753 "method": "nvmf_create_subsystem", 00:13:22.753 "req_id": 1 00:13:22.753 } 00:13:22.753 Got JSON-RPC error response 00:13:22.753 response: 00:13:22.753 { 00:13:22.753 "code": -32602, 00:13:22.753 "message": "Invalid cntlid range [65520-65519]" 00:13:22.753 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:22.753 06:39:27 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19813 -I 0 00:13:23.011 [2024-04-17 06:39:27.551432] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19813: invalid cntlid range [1-0] 00:13:23.011 06:39:27 -- target/invalid.sh@77 -- # 
out='request: 00:13:23.011 { 00:13:23.011 "nqn": "nqn.2016-06.io.spdk:cnode19813", 00:13:23.011 "max_cntlid": 0, 00:13:23.011 "method": "nvmf_create_subsystem", 00:13:23.011 "req_id": 1 00:13:23.011 } 00:13:23.011 Got JSON-RPC error response 00:13:23.011 response: 00:13:23.011 { 00:13:23.011 "code": -32602, 00:13:23.011 "message": "Invalid cntlid range [1-0]" 00:13:23.011 }' 00:13:23.011 06:39:27 -- target/invalid.sh@78 -- # [[ request: 00:13:23.011 { 00:13:23.011 "nqn": "nqn.2016-06.io.spdk:cnode19813", 00:13:23.011 "max_cntlid": 0, 00:13:23.011 "method": "nvmf_create_subsystem", 00:13:23.011 "req_id": 1 00:13:23.011 } 00:13:23.011 Got JSON-RPC error response 00:13:23.011 response: 00:13:23.011 { 00:13:23.011 "code": -32602, 00:13:23.011 "message": "Invalid cntlid range [1-0]" 00:13:23.011 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:23.011 06:39:27 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4064 -I 65520 00:13:23.269 [2024-04-17 06:39:27.784202] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4064: invalid cntlid range [1-65520] 00:13:23.269 06:39:27 -- target/invalid.sh@79 -- # out='request: 00:13:23.269 { 00:13:23.269 "nqn": "nqn.2016-06.io.spdk:cnode4064", 00:13:23.269 "max_cntlid": 65520, 00:13:23.269 "method": "nvmf_create_subsystem", 00:13:23.269 "req_id": 1 00:13:23.269 } 00:13:23.269 Got JSON-RPC error response 00:13:23.269 response: 00:13:23.269 { 00:13:23.269 "code": -32602, 00:13:23.269 "message": "Invalid cntlid range [1-65520]" 00:13:23.269 }' 00:13:23.269 06:39:27 -- target/invalid.sh@80 -- # [[ request: 00:13:23.269 { 00:13:23.269 "nqn": "nqn.2016-06.io.spdk:cnode4064", 00:13:23.269 "max_cntlid": 65520, 00:13:23.269 "method": "nvmf_create_subsystem", 00:13:23.269 "req_id": 1 00:13:23.269 } 00:13:23.269 Got JSON-RPC error response 00:13:23.269 response: 00:13:23.269 { 00:13:23.269 "code": -32602, 00:13:23.269 "message": "Invalid cntlid range [1-65520]" 00:13:23.269 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:23.269 06:39:27 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12731 -i 6 -I 5 00:13:23.527 [2024-04-17 06:39:28.041018] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12731: invalid cntlid range [6-5] 00:13:23.527 06:39:28 -- target/invalid.sh@83 -- # out='request: 00:13:23.527 { 00:13:23.527 "nqn": "nqn.2016-06.io.spdk:cnode12731", 00:13:23.527 "min_cntlid": 6, 00:13:23.527 "max_cntlid": 5, 00:13:23.527 "method": "nvmf_create_subsystem", 00:13:23.527 "req_id": 1 00:13:23.527 } 00:13:23.527 Got JSON-RPC error response 00:13:23.527 response: 00:13:23.527 { 00:13:23.527 "code": -32602, 00:13:23.527 "message": "Invalid cntlid range [6-5]" 00:13:23.527 }' 00:13:23.527 06:39:28 -- target/invalid.sh@84 -- # [[ request: 00:13:23.527 { 00:13:23.527 "nqn": "nqn.2016-06.io.spdk:cnode12731", 00:13:23.527 "min_cntlid": 6, 00:13:23.527 "max_cntlid": 5, 00:13:23.527 "method": "nvmf_create_subsystem", 00:13:23.527 "req_id": 1 00:13:23.527 } 00:13:23.527 Got JSON-RPC error response 00:13:23.527 response: 00:13:23.527 { 00:13:23.527 "code": -32602, 00:13:23.527 "message": "Invalid cntlid range [6-5]" 00:13:23.527 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:23.527 06:39:28 -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:23.785 06:39:28 -- target/invalid.sh@87 -- # out='request: 00:13:23.785 { 00:13:23.785 "name": "foobar", 00:13:23.785 "method": "nvmf_delete_target", 00:13:23.785 "req_id": 1 00:13:23.785 } 00:13:23.785 Got JSON-RPC error response 00:13:23.785 response: 00:13:23.785 { 00:13:23.785 "code": -32602, 00:13:23.785 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:23.785 }' 00:13:23.785 06:39:28 -- target/invalid.sh@88 -- # [[ request: 00:13:23.785 { 00:13:23.785 "name": "foobar", 00:13:23.785 "method": "nvmf_delete_target", 00:13:23.785 "req_id": 1 00:13:23.785 } 00:13:23.785 Got JSON-RPC error response 00:13:23.785 response: 00:13:23.785 { 00:13:23.785 "code": -32602, 00:13:23.785 "message": "The specified target doesn't exist, cannot delete it." 00:13:23.785 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:23.785 06:39:28 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:23.785 06:39:28 -- target/invalid.sh@91 -- # nvmftestfini 00:13:23.785 06:39:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:23.785 06:39:28 -- nvmf/common.sh@117 -- # sync 00:13:23.785 06:39:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:23.785 06:39:28 -- nvmf/common.sh@120 -- # set +e 00:13:23.785 06:39:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:23.785 06:39:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:23.785 rmmod nvme_tcp 00:13:23.785 rmmod nvme_fabrics 00:13:23.785 rmmod nvme_keyring 00:13:23.785 06:39:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:23.785 06:39:28 -- nvmf/common.sh@124 -- # set -e 00:13:23.785 06:39:28 -- nvmf/common.sh@125 -- # return 0 00:13:23.785 06:39:28 -- nvmf/common.sh@478 -- # '[' -n 4128877 ']' 00:13:23.785 06:39:28 -- nvmf/common.sh@479 -- # killprocess 4128877 00:13:23.785 06:39:28 -- common/autotest_common.sh@936 -- # '[' -z 4128877 ']' 00:13:23.785 06:39:28 -- common/autotest_common.sh@940 -- # kill -0 4128877 00:13:23.785 06:39:28 -- common/autotest_common.sh@941 -- # uname 00:13:23.785 06:39:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:23.785 06:39:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4128877 00:13:23.785 06:39:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:23.785 06:39:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:23.785 06:39:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4128877' 00:13:23.785 killing process with pid 4128877 00:13:23.785 06:39:28 -- common/autotest_common.sh@955 -- # kill 4128877 00:13:23.785 06:39:28 -- common/autotest_common.sh@960 -- # wait 4128877 00:13:24.043 06:39:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:24.043 06:39:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:24.043 06:39:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:24.043 06:39:28 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:24.043 06:39:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:24.043 06:39:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.043 06:39:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:24.043 06:39:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.944 06:39:30 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:25.944 00:13:25.944 real 
0m8.404s 00:13:25.944 user 0m19.261s 00:13:25.944 sys 0m2.340s 00:13:25.944 06:39:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:25.944 06:39:30 -- common/autotest_common.sh@10 -- # set +x 00:13:25.944 ************************************ 00:13:25.944 END TEST nvmf_invalid 00:13:25.944 ************************************ 00:13:26.202 06:39:30 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:26.202 06:39:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:26.202 06:39:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:26.202 06:39:30 -- common/autotest_common.sh@10 -- # set +x 00:13:26.202 ************************************ 00:13:26.202 START TEST nvmf_abort 00:13:26.202 ************************************ 00:13:26.202 06:39:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:26.202 * Looking for test storage... 00:13:26.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:26.202 06:39:30 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:26.202 06:39:30 -- nvmf/common.sh@7 -- # uname -s 00:13:26.202 06:39:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.202 06:39:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.202 06:39:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.202 06:39:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.202 06:39:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.202 06:39:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.202 06:39:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.202 06:39:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.202 06:39:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.202 06:39:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.202 06:39:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:26.202 06:39:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:26.202 06:39:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.202 06:39:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.202 06:39:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:26.202 06:39:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.202 06:39:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:26.202 06:39:30 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.202 06:39:30 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.202 06:39:30 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.202 06:39:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
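The nvmf_invalid run that finished above exercises parameter validation in the target's JSON-RPC layer: each malformed nvmf_create_subsystem call is expected to come back with error code -32602 and a message the script pattern-matches. A minimal sketch of the same checks, assuming a running nvmf_tgt and using scripts/rpc.py from the SPDK tree (abbreviated here as rpc.py), would be:

    rpc.py nvmf_create_transport --trtype tcp
    # cntlid must lie in 1..65519 and min_cntlid <= max_cntlid, so each of these is rejected
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9832 -i 0          # Invalid cntlid range [0-65519]
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4064 -I 65520      # Invalid cntlid range [1-65520]
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12731 -i 6 -I 5    # Invalid cntlid range [6-5]
    # the model number (-d) is a 40-byte NVMe field, so the 41-character random string above fails with "Invalid MN"
    rpc.py nvmf_create_subsystem -d "$(printf 'x%.0s' {1..41})" nqn.2016-06.io.spdk:cnode17757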
00:13:26.202 06:39:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.203 06:39:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.203 06:39:30 -- paths/export.sh@5 -- # export PATH 00:13:26.203 06:39:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.203 06:39:30 -- nvmf/common.sh@47 -- # : 0 00:13:26.203 06:39:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:26.203 06:39:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:26.203 06:39:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.203 06:39:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.203 06:39:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.203 06:39:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:26.203 06:39:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:26.203 06:39:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:26.203 06:39:30 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:26.203 06:39:30 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:26.203 06:39:30 -- target/abort.sh@14 -- # nvmftestinit 00:13:26.203 06:39:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:26.203 06:39:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.203 06:39:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:26.203 06:39:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:26.203 06:39:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:26.203 06:39:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.203 06:39:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:26.203 06:39:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.203 06:39:30 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:26.203 06:39:30 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:26.203 06:39:30 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:26.203 06:39:30 -- common/autotest_common.sh@10 -- # set +x 00:13:28.733 06:39:32 -- nvmf/common.sh@289 
-- # local intel=0x8086 mellanox=0x15b3 pci 00:13:28.733 06:39:32 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:28.733 06:39:32 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:28.733 06:39:32 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:28.733 06:39:32 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:28.733 06:39:32 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:28.733 06:39:32 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:28.733 06:39:32 -- nvmf/common.sh@295 -- # net_devs=() 00:13:28.733 06:39:32 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:28.733 06:39:32 -- nvmf/common.sh@296 -- # e810=() 00:13:28.733 06:39:32 -- nvmf/common.sh@296 -- # local -ga e810 00:13:28.733 06:39:32 -- nvmf/common.sh@297 -- # x722=() 00:13:28.733 06:39:32 -- nvmf/common.sh@297 -- # local -ga x722 00:13:28.733 06:39:32 -- nvmf/common.sh@298 -- # mlx=() 00:13:28.733 06:39:32 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:28.733 06:39:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:28.733 06:39:32 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:28.733 06:39:32 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:28.733 06:39:32 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:28.733 06:39:32 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:28.733 06:39:32 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:28.733 06:39:32 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:28.733 06:39:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:28.733 06:39:32 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:28.733 06:39:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:28.733 06:39:32 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:28.733 06:39:32 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:28.733 06:39:32 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:28.733 06:39:32 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:28.733 06:39:32 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:28.733 06:39:32 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:28.733 06:39:32 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:28.733 06:39:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:28.733 06:39:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:28.733 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:28.733 06:39:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:28.733 06:39:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:28.733 06:39:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.733 06:39:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.733 06:39:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:28.733 06:39:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:28.733 06:39:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:28.733 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:28.733 06:39:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:28.733 06:39:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:28.733 06:39:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.733 06:39:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.733 06:39:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:28.733 06:39:32 -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:28.733 06:39:32 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:28.733 06:39:32 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:28.733 06:39:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:28.733 06:39:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.733 06:39:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:28.733 06:39:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.733 06:39:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:28.733 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:28.733 06:39:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.733 06:39:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:28.733 06:39:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.733 06:39:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:28.733 06:39:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.733 06:39:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:28.733 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:28.733 06:39:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.733 06:39:32 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:28.733 06:39:32 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:28.733 06:39:32 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:28.733 06:39:32 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:28.733 06:39:32 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:28.733 06:39:32 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:28.733 06:39:32 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:28.733 06:39:32 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:28.733 06:39:32 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:28.733 06:39:32 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:28.733 06:39:32 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:28.733 06:39:32 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:28.733 06:39:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:28.733 06:39:32 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:28.733 06:39:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:28.733 06:39:32 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:28.733 06:39:32 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:28.733 06:39:32 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:28.733 06:39:32 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:28.733 06:39:32 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:28.733 06:39:32 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:28.733 06:39:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:28.733 06:39:32 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:28.733 06:39:32 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:28.733 06:39:32 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:28.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:28.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:13:28.733 00:13:28.733 --- 10.0.0.2 ping statistics --- 00:13:28.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.733 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:13:28.734 06:39:32 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:28.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:28.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:13:28.734 00:13:28.734 --- 10.0.0.1 ping statistics --- 00:13:28.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.734 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:13:28.734 06:39:32 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:28.734 06:39:32 -- nvmf/common.sh@411 -- # return 0 00:13:28.734 06:39:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:28.734 06:39:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:28.734 06:39:32 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:28.734 06:39:32 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:28.734 06:39:32 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:28.734 06:39:32 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:28.734 06:39:32 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:28.734 06:39:32 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:28.734 06:39:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:28.734 06:39:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:28.734 06:39:32 -- common/autotest_common.sh@10 -- # set +x 00:13:28.734 06:39:32 -- nvmf/common.sh@470 -- # nvmfpid=4131517 00:13:28.734 06:39:32 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:28.734 06:39:32 -- nvmf/common.sh@471 -- # waitforlisten 4131517 00:13:28.734 06:39:32 -- common/autotest_common.sh@817 -- # '[' -z 4131517 ']' 00:13:28.734 06:39:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.734 06:39:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:28.734 06:39:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.734 06:39:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:28.734 06:39:32 -- common/autotest_common.sh@10 -- # set +x 00:13:28.734 [2024-04-17 06:39:32.923665] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:13:28.734 [2024-04-17 06:39:32.923748] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.734 EAL: No free 2048 kB hugepages reported on node 1 00:13:28.734 [2024-04-17 06:39:32.994883] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:28.734 [2024-04-17 06:39:33.086853] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.734 [2024-04-17 06:39:33.086919] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
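The nvmftestinit trace above splits the two E810 ports into a point-to-point test link: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and becomes the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), with both directions verified by ping. Condensed from the trace, the equivalent manual setup is roughly:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator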
00:13:28.734 [2024-04-17 06:39:33.086935] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.734 [2024-04-17 06:39:33.086949] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.734 [2024-04-17 06:39:33.086960] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:28.734 [2024-04-17 06:39:33.087055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.734 [2024-04-17 06:39:33.087110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:28.734 [2024-04-17 06:39:33.087114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.734 06:39:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:28.734 06:39:33 -- common/autotest_common.sh@850 -- # return 0 00:13:28.734 06:39:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:28.734 06:39:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:28.734 06:39:33 -- common/autotest_common.sh@10 -- # set +x 00:13:28.734 06:39:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.734 06:39:33 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:28.734 06:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.734 06:39:33 -- common/autotest_common.sh@10 -- # set +x 00:13:28.734 [2024-04-17 06:39:33.238789] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:28.734 06:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.734 06:39:33 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:28.734 06:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.734 06:39:33 -- common/autotest_common.sh@10 -- # set +x 00:13:28.734 Malloc0 00:13:28.734 06:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.734 06:39:33 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:28.734 06:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.734 06:39:33 -- common/autotest_common.sh@10 -- # set +x 00:13:28.734 Delay0 00:13:28.734 06:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.734 06:39:33 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:28.734 06:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.734 06:39:33 -- common/autotest_common.sh@10 -- # set +x 00:13:28.734 06:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.734 06:39:33 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:28.734 06:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.734 06:39:33 -- common/autotest_common.sh@10 -- # set +x 00:13:28.734 06:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.734 06:39:33 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:28.734 06:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.734 06:39:33 -- common/autotest_common.sh@10 -- # set +x 00:13:28.734 [2024-04-17 06:39:33.306622] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:28.734 06:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.734 06:39:33 -- target/abort.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:28.734 06:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.734 06:39:33 -- common/autotest_common.sh@10 -- # set +x 00:13:28.734 06:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.734 06:39:33 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:28.993 EAL: No free 2048 kB hugepages reported on node 1 00:13:28.993 [2024-04-17 06:39:33.371688] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:31.522 Initializing NVMe Controllers 00:13:31.522 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:31.522 controller IO queue size 128 less than required 00:13:31.522 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:31.522 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:31.522 Initialization complete. Launching workers. 00:13:31.522 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 33806 00:13:31.522 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33871, failed to submit 62 00:13:31.522 success 33810, unsuccess 61, failed 0 00:13:31.522 06:39:35 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:31.522 06:39:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:31.522 06:39:35 -- common/autotest_common.sh@10 -- # set +x 00:13:31.522 06:39:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:31.522 06:39:35 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:31.522 06:39:35 -- target/abort.sh@38 -- # nvmftestfini 00:13:31.522 06:39:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:31.522 06:39:35 -- nvmf/common.sh@117 -- # sync 00:13:31.522 06:39:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:31.522 06:39:35 -- nvmf/common.sh@120 -- # set +e 00:13:31.522 06:39:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:31.522 06:39:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:31.522 rmmod nvme_tcp 00:13:31.522 rmmod nvme_fabrics 00:13:31.522 rmmod nvme_keyring 00:13:31.522 06:39:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:31.522 06:39:35 -- nvmf/common.sh@124 -- # set -e 00:13:31.522 06:39:35 -- nvmf/common.sh@125 -- # return 0 00:13:31.522 06:39:35 -- nvmf/common.sh@478 -- # '[' -n 4131517 ']' 00:13:31.522 06:39:35 -- nvmf/common.sh@479 -- # killprocess 4131517 00:13:31.522 06:39:35 -- common/autotest_common.sh@936 -- # '[' -z 4131517 ']' 00:13:31.522 06:39:35 -- common/autotest_common.sh@940 -- # kill -0 4131517 00:13:31.522 06:39:35 -- common/autotest_common.sh@941 -- # uname 00:13:31.522 06:39:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:31.522 06:39:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4131517 00:13:31.522 06:39:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:31.522 06:39:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:31.522 06:39:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4131517' 00:13:31.522 killing process with pid 4131517 00:13:31.522 06:39:35 -- common/autotest_common.sh@955 -- # kill 4131517 00:13:31.522 06:39:35 -- 
common/autotest_common.sh@960 -- # wait 4131517 00:13:31.522 06:39:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:31.522 06:39:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:31.522 06:39:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:31.522 06:39:35 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:31.522 06:39:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:31.522 06:39:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:31.522 06:39:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:31.522 06:39:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.429 06:39:37 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:33.429 00:13:33.429 real 0m7.254s 00:13:33.429 user 0m10.561s 00:13:33.429 sys 0m2.519s 00:13:33.429 06:39:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:33.429 06:39:37 -- common/autotest_common.sh@10 -- # set +x 00:13:33.429 ************************************ 00:13:33.429 END TEST nvmf_abort 00:13:33.429 ************************************ 00:13:33.429 06:39:37 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:33.429 06:39:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:33.429 06:39:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:33.429 06:39:37 -- common/autotest_common.sh@10 -- # set +x 00:13:33.429 ************************************ 00:13:33.429 START TEST nvmf_ns_hotplug_stress 00:13:33.429 ************************************ 00:13:33.688 06:39:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:33.688 * Looking for test storage... 
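The nvmf_abort run that ended above puts a delay bdev (Delay0) on top of a malloc bdev so that I/O stays outstanding long enough to be aborted; the abort example then submits reads at queue depth 128 and fires aborts at the queued commands (33871 aborts submitted, 33810 successful in the summary above). Reduced to its RPC skeleton, and assuming nvmf_tgt is already listening on 10.0.0.2, the flow is roughly:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc.py bdev_malloc_create 64 4096 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128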
00:13:33.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:33.688 06:39:38 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:33.688 06:39:38 -- nvmf/common.sh@7 -- # uname -s 00:13:33.688 06:39:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:33.688 06:39:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:33.688 06:39:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:33.688 06:39:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:33.688 06:39:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:33.688 06:39:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:33.688 06:39:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:33.688 06:39:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:33.688 06:39:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:33.688 06:39:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:33.688 06:39:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:33.688 06:39:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:33.688 06:39:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:33.688 06:39:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:33.688 06:39:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:33.688 06:39:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:33.688 06:39:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:33.688 06:39:38 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:33.688 06:39:38 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:33.688 06:39:38 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:33.688 06:39:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.688 06:39:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.688 06:39:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.688 06:39:38 -- paths/export.sh@5 -- # export PATH 00:13:33.688 06:39:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.688 06:39:38 -- nvmf/common.sh@47 -- # : 0 00:13:33.688 06:39:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:33.688 06:39:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:33.688 06:39:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:33.688 06:39:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:33.688 06:39:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:33.688 06:39:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:33.688 06:39:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:33.688 06:39:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:33.688 06:39:38 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:33.688 06:39:38 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:13:33.688 06:39:38 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:33.688 06:39:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:33.688 06:39:38 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:33.688 06:39:38 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:33.688 06:39:38 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:33.688 06:39:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.688 06:39:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:33.688 06:39:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.688 06:39:38 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:33.688 06:39:38 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:33.688 06:39:38 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:33.688 06:39:38 -- common/autotest_common.sh@10 -- # set +x 00:13:35.643 06:39:40 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:35.643 06:39:40 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:35.643 06:39:40 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:35.644 06:39:40 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:35.644 06:39:40 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:35.644 06:39:40 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:35.644 06:39:40 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:35.644 06:39:40 -- nvmf/common.sh@295 -- # net_devs=() 00:13:35.644 06:39:40 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:35.644 06:39:40 -- nvmf/common.sh@296 
-- # e810=() 00:13:35.644 06:39:40 -- nvmf/common.sh@296 -- # local -ga e810 00:13:35.644 06:39:40 -- nvmf/common.sh@297 -- # x722=() 00:13:35.644 06:39:40 -- nvmf/common.sh@297 -- # local -ga x722 00:13:35.644 06:39:40 -- nvmf/common.sh@298 -- # mlx=() 00:13:35.644 06:39:40 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:35.644 06:39:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:35.644 06:39:40 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:35.644 06:39:40 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:35.644 06:39:40 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:35.644 06:39:40 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:35.644 06:39:40 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:35.644 06:39:40 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:35.644 06:39:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:35.644 06:39:40 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:35.644 06:39:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:35.644 06:39:40 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:35.644 06:39:40 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:35.644 06:39:40 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:35.644 06:39:40 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:35.644 06:39:40 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:35.644 06:39:40 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:35.644 06:39:40 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:35.644 06:39:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:35.644 06:39:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:35.644 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:35.644 06:39:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:35.644 06:39:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:35.644 06:39:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:35.644 06:39:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:35.644 06:39:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:35.644 06:39:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:35.644 06:39:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:35.644 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:35.644 06:39:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:35.644 06:39:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:35.644 06:39:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:35.644 06:39:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:35.644 06:39:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:35.644 06:39:40 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:35.644 06:39:40 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:35.644 06:39:40 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:35.644 06:39:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:35.644 06:39:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:35.644 06:39:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:35.644 06:39:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:35.644 06:39:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:35.644 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:13:35.644 06:39:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:35.644 06:39:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:35.644 06:39:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:35.644 06:39:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:35.644 06:39:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:35.644 06:39:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:35.644 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:35.644 06:39:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:35.644 06:39:40 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:35.644 06:39:40 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:35.644 06:39:40 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:35.644 06:39:40 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:35.644 06:39:40 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:35.644 06:39:40 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:35.644 06:39:40 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:35.644 06:39:40 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:35.644 06:39:40 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:35.644 06:39:40 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:35.644 06:39:40 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:35.644 06:39:40 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:35.644 06:39:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:35.644 06:39:40 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:35.644 06:39:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:35.644 06:39:40 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:35.644 06:39:40 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:35.644 06:39:40 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:35.902 06:39:40 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:35.902 06:39:40 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:35.902 06:39:40 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:35.902 06:39:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:35.902 06:39:40 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:35.902 06:39:40 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:35.902 06:39:40 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:35.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:35.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:13:35.902 00:13:35.902 --- 10.0.0.2 ping statistics --- 00:13:35.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.902 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:13:35.903 06:39:40 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:35.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:35.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:13:35.903 00:13:35.903 --- 10.0.0.1 ping statistics --- 00:13:35.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.903 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:13:35.903 06:39:40 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:35.903 06:39:40 -- nvmf/common.sh@411 -- # return 0 00:13:35.903 06:39:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:35.903 06:39:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:35.903 06:39:40 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:35.903 06:39:40 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:35.903 06:39:40 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:35.903 06:39:40 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:35.903 06:39:40 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:35.903 06:39:40 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:13:35.903 06:39:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:35.903 06:39:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:35.903 06:39:40 -- common/autotest_common.sh@10 -- # set +x 00:13:35.903 06:39:40 -- nvmf/common.sh@470 -- # nvmfpid=4133864 00:13:35.903 06:39:40 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:35.903 06:39:40 -- nvmf/common.sh@471 -- # waitforlisten 4133864 00:13:35.903 06:39:40 -- common/autotest_common.sh@817 -- # '[' -z 4133864 ']' 00:13:35.903 06:39:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.903 06:39:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:35.903 06:39:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.903 06:39:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:35.903 06:39:40 -- common/autotest_common.sh@10 -- # set +x 00:13:35.903 [2024-04-17 06:39:40.416438] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:13:35.903 [2024-04-17 06:39:40.416529] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:35.903 EAL: No free 2048 kB hugepages reported on node 1 00:13:35.903 [2024-04-17 06:39:40.486432] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:36.161 [2024-04-17 06:39:40.579705] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:36.161 [2024-04-17 06:39:40.579767] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:36.161 [2024-04-17 06:39:40.579791] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:36.161 [2024-04-17 06:39:40.579813] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:36.161 [2024-04-17 06:39:40.579825] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
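As in the abort test, nvmfappstart launches the target inside the namespace and waits for its RPC socket before any rpc.py call is made. A rough stand-in for what the helper does, assuming the default /var/tmp/spdk.sock RPC socket, is:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # waitforlisten: poll the RPC socket until the app answers
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done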
00:13:36.161 [2024-04-17 06:39:40.579908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:36.161 [2024-04-17 06:39:40.579962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:36.161 [2024-04-17 06:39:40.579965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.161 06:39:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:36.161 06:39:40 -- common/autotest_common.sh@850 -- # return 0 00:13:36.161 06:39:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:36.161 06:39:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:36.161 06:39:40 -- common/autotest_common.sh@10 -- # set +x 00:13:36.161 06:39:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:36.161 06:39:40 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:13:36.161 06:39:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:36.418 [2024-04-17 06:39:40.952094] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:36.418 06:39:40 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:36.675 06:39:41 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.932 [2024-04-17 06:39:41.434827] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.932 06:39:41 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:37.190 06:39:41 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:37.447 Malloc0 00:13:37.447 06:39:41 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:37.704 Delay0 00:13:37.704 06:39:42 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.961 06:39:42 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:38.219 NULL1 00:13:38.219 06:39:42 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:38.477 06:39:42 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=4134161 00:13:38.477 06:39:42 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:38.477 06:39:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4134161 00:13:38.477 06:39:42 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.477 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.735 06:39:43 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.992 06:39:43 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:13:38.992 06:39:43 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:39.250 true 00:13:39.250 06:39:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4134161 00:13:39.250 06:39:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.506 06:39:43 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.764 06:39:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:13:39.764 06:39:44 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:40.021 true 00:13:40.021 06:39:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4134161 00:13:40.021 06:39:44 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.954 Read completed with error (sct=0, sc=11) 00:13:40.954 06:39:45 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:40.954 06:39:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:13:40.954 06:39:45 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:41.211 true 00:13:41.211 06:39:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4134161 00:13:41.211 06:39:45 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.469 06:39:45 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.726 06:39:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:13:41.726 06:39:46 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:41.983 true 00:13:41.983 06:39:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4134161 00:13:41.984 06:39:46 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.915 06:39:47 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:43.173 06:39:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:13:43.173 06:39:47 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:43.431 true 00:13:43.431 06:39:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4134161 00:13:43.431 06:39:47 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.689 06:39:48 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.946 06:39:48 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:13:43.946 06:39:48 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:44.204 true 00:13:44.204 06:39:48 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4134161 00:13:44.204 06:39:48 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.137 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:45.137 06:39:49 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.137 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:45.137 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:45.137 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:45.394 06:39:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:13:45.394 06:39:49 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:45.652 true 00:13:45.652 06:39:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4134161 00:13:45.652 06:39:50 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.909 06:39:50 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.167 06:39:50 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:13:46.167 06:39:50 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:46.424 true 00:13:46.424 06:39:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4134161 00:13:46.424 06:39:50 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.355 06:39:51 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.613 06:39:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:13:47.613 06:39:51 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:47.870 true 00:13:47.870 06:39:52 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4134161 00:13:47.870 06:39:52 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.127 06:39:52 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.386 06:39:52 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:13:48.386 06:39:52 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 
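The repeating pattern above and below is the hotplug loop itself: while spdk_nvme_perf (PID 4134161) keeps reading, the script hot-removes namespace 1, re-attaches the Delay0 bdev, and grows NULL1 by one unit per pass. A condensed sketch, assuming the kill -0 check on the perf PID is the loop condition (paths shortened):

    null_size=1000
    while kill -0 "$PERF_PID" 2> /dev/null; do                          # stop once spdk_nvme_perf exits
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove namespace 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-add it back
        null_size=$((null_size + 1))
        rpc.py bdev_null_resize NULL1 "$null_size"                      # resize the second namespace's bdev
    done

Backing the namespace with a delay bdev (one second on every operation, per the bdev_delay_create call earlier) presumably keeps I/O outstanding long enough that each removal races against in-flight reads, which the -Q 1000 perf run is there to tolerate.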
00:13:48.386 true 00:13:48.386 06:39:52 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4134161 00:13:48.386 06:39:52 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.674 06:39:53 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.931 06:39:53 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:13:48.931 06:39:53 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:49.189 true 00:13:49.189 06:39:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4134161 00:13:49.189 06:39:53 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.560 06:39:54 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.560 06:39:55 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:13:50.560 06:39:55 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:50.818 true 00:13:50.818 06:39:55 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4134161 00:13:50.818 06:39:55 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.749 06:39:56 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.007 06:39:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:13:52.007 06:39:56 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:52.007 true 00:13:52.007 06:39:56 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4134161 00:13:52.007 06:39:56 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.264 06:39:56 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.522 06:39:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:13:52.522 06:39:57 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:52.779 true 00:13:52.779 06:39:57 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4134161 00:13:52.779 06:39:57 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.713 06:39:58 -- 
target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.970 06:39:58 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:13:53.970 06:39:58 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:54.227 true 00:13:54.227 06:39:58 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4134161 00:13:54.227 06:39:58 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.485 06:39:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.742 06:39:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:13:54.742 06:39:59 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:54.742 true 00:13:54.742 06:39:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4134161 00:13:54.742 06:39:59 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.675 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.675 06:40:00 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.675 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.933 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.933 06:40:00 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:13:55.933 06:40:00 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:56.190 true 00:13:56.190 06:40:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4134161 00:13:56.190 06:40:00 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.448 06:40:00 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.707 06:40:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:13:56.707 06:40:01 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:56.964 true 00:13:56.964 06:40:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4134161 00:13:56.964 06:40:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:57.895 06:40:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.153 06:40:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:13:58.153 06:40:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:58.411 true 00:13:58.411 06:40:02 -- target/ns_hotplug_stress.sh@35 -- # kill 
-0 4134161 00:13:58.411 06:40:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.669 06:40:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.926 06:40:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:13:58.926 06:40:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:59.184 true 00:13:59.184 06:40:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4134161 00:13:59.184 06:40:03 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:00.118 06:40:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.375 06:40:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:14:00.375 06:40:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:14:00.632 true 00:14:00.632 06:40:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4134161 00:14:00.632 06:40:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.894 06:40:05 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.150 06:40:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:14:01.150 06:40:05 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:14:01.407 true 00:14:01.407 06:40:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4134161 00:14:01.407 06:40:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.340 06:40:06 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:02.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:02.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:02.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:02.340 06:40:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:14:02.340 06:40:06 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:14:02.651 true 00:14:02.651 06:40:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4134161 00:14:02.651 06:40:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.909 06:40:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.166 06:40:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:14:03.166 06:40:07 
-- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:03.484 true 00:14:03.484 06:40:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4134161 00:14:03.484 06:40:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.415 06:40:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.672 06:40:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:14:04.673 06:40:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:04.930 true 00:14:04.930 06:40:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4134161 00:14:04.930 06:40:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.187 06:40:09 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.445 06:40:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:14:05.445 06:40:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:05.445 true 00:14:05.702 06:40:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4134161 00:14:05.702 06:40:10 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.642 06:40:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:06.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:06.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:06.642 06:40:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:14:06.642 06:40:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:06.899 true 00:14:06.899 06:40:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4134161 00:14:06.900 06:40:11 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.157 06:40:11 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.413 06:40:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:14:07.413 06:40:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:07.670 true 00:14:07.670 06:40:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4134161 00:14:07.670 06:40:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.604 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:08.604 06:40:13 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.604 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:08.861 06:40:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:14:08.861 06:40:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:14:08.861 Initializing NVMe Controllers 00:14:08.861 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:08.861 Controller IO queue size 128, less than required. 00:14:08.861 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:08.861 Controller IO queue size 128, less than required. 00:14:08.861 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:08.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:08.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:08.861 Initialization complete. Launching workers. 00:14:08.861 ======================================================== 00:14:08.861 Latency(us) 00:14:08.861 Device Information : IOPS MiB/s Average min max 00:14:08.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 868.28 0.42 77790.97 2172.61 1013531.69 00:14:08.862 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10879.90 5.31 11730.60 3063.13 451183.81 00:14:08.862 ======================================================== 00:14:08.862 Total : 11748.18 5.74 16612.99 2172.61 1013531.69 00:14:08.862 00:14:09.120 true 00:14:09.120 06:40:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4134161 00:14:09.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (4134161) - No such process 00:14:09.120 06:40:13 -- target/ns_hotplug_stress.sh@44 -- # wait 4134161 00:14:09.120 06:40:13 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:09.120 06:40:13 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:14:09.120 06:40:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:09.120 06:40:13 -- nvmf/common.sh@117 -- # sync 00:14:09.120 06:40:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:09.120 06:40:13 -- nvmf/common.sh@120 -- # set +e 00:14:09.120 06:40:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:09.120 06:40:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:09.120 rmmod nvme_tcp 00:14:09.120 rmmod nvme_fabrics 00:14:09.120 rmmod nvme_keyring 00:14:09.120 06:40:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:09.120 06:40:13 -- nvmf/common.sh@124 -- # set -e 00:14:09.120 06:40:13 -- nvmf/common.sh@125 -- # return 0 00:14:09.120 06:40:13 -- nvmf/common.sh@478 -- # '[' -n 4133864 ']' 00:14:09.120 06:40:13 -- nvmf/common.sh@479 -- # killprocess 4133864 00:14:09.120 06:40:13 -- common/autotest_common.sh@936 -- # '[' -z 4133864 ']' 00:14:09.120 06:40:13 -- common/autotest_common.sh@940 -- # kill -0 4133864 00:14:09.120 06:40:13 -- common/autotest_common.sh@941 -- # uname 00:14:09.120 06:40:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:09.120 06:40:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4133864 00:14:09.120 06:40:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:09.120 06:40:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:09.120 06:40:13 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 4133864' 00:14:09.120 killing process with pid 4133864 00:14:09.120 06:40:13 -- common/autotest_common.sh@955 -- # kill 4133864 00:14:09.120 06:40:13 -- common/autotest_common.sh@960 -- # wait 4133864 00:14:09.379 06:40:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:09.379 06:40:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:09.379 06:40:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:09.379 06:40:13 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:09.379 06:40:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:09.379 06:40:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.379 06:40:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:09.379 06:40:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.913 06:40:15 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:11.913 00:14:11.913 real 0m37.901s 00:14:11.913 user 2m27.125s 00:14:11.913 sys 0m10.143s 00:14:11.913 06:40:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:11.913 06:40:15 -- common/autotest_common.sh@10 -- # set +x 00:14:11.913 ************************************ 00:14:11.913 END TEST nvmf_ns_hotplug_stress 00:14:11.913 ************************************ 00:14:11.913 06:40:15 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:11.914 06:40:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:11.914 06:40:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:11.914 06:40:15 -- common/autotest_common.sh@10 -- # set +x 00:14:11.914 ************************************ 00:14:11.914 START TEST nvmf_connect_stress 00:14:11.914 ************************************ 00:14:11.914 06:40:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:11.914 * Looking for test storage... 
00:14:11.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:11.914 06:40:16 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:11.914 06:40:16 -- nvmf/common.sh@7 -- # uname -s 00:14:11.914 06:40:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:11.914 06:40:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:11.914 06:40:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:11.914 06:40:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:11.914 06:40:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:11.914 06:40:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:11.914 06:40:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:11.914 06:40:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:11.914 06:40:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:11.914 06:40:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:11.914 06:40:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:11.914 06:40:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:11.914 06:40:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:11.914 06:40:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:11.914 06:40:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:11.914 06:40:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:11.914 06:40:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:11.914 06:40:16 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:11.914 06:40:16 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:11.914 06:40:16 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:11.914 06:40:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.914 06:40:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.914 06:40:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.914 06:40:16 -- paths/export.sh@5 -- # export PATH 00:14:11.914 06:40:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.914 06:40:16 -- nvmf/common.sh@47 -- # : 0 00:14:11.914 06:40:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:11.914 06:40:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:11.914 06:40:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:11.914 06:40:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:11.914 06:40:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:11.914 06:40:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:11.914 06:40:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:11.914 06:40:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:11.914 06:40:16 -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:11.914 06:40:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:11.914 06:40:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:11.914 06:40:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:11.914 06:40:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:11.914 06:40:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:11.914 06:40:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.914 06:40:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:11.914 06:40:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.914 06:40:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:11.914 06:40:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:11.914 06:40:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:11.914 06:40:16 -- common/autotest_common.sh@10 -- # set +x 00:14:13.816 06:40:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:13.816 06:40:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:13.816 06:40:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:13.816 06:40:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:13.816 06:40:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:13.816 06:40:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:13.816 06:40:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:13.816 06:40:18 -- nvmf/common.sh@295 -- # net_devs=() 00:14:13.816 06:40:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:13.816 06:40:18 -- nvmf/common.sh@296 -- # e810=() 00:14:13.816 06:40:18 -- nvmf/common.sh@296 -- # local -ga e810 00:14:13.816 06:40:18 -- nvmf/common.sh@297 -- # x722=() 
00:14:13.816 06:40:18 -- nvmf/common.sh@297 -- # local -ga x722 00:14:13.816 06:40:18 -- nvmf/common.sh@298 -- # mlx=() 00:14:13.816 06:40:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:13.816 06:40:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:13.816 06:40:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:13.816 06:40:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:13.816 06:40:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:13.816 06:40:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:13.816 06:40:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:13.816 06:40:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:13.816 06:40:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:13.816 06:40:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:13.816 06:40:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:13.816 06:40:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:13.816 06:40:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:13.816 06:40:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:13.816 06:40:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:13.816 06:40:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:13.816 06:40:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:13.816 06:40:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:13.816 06:40:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:13.816 06:40:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:13.816 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:13.816 06:40:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:13.816 06:40:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:13.816 06:40:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:13.816 06:40:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:13.816 06:40:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:13.816 06:40:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:13.816 06:40:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:13.816 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:13.816 06:40:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:13.816 06:40:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:13.816 06:40:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:13.816 06:40:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:13.816 06:40:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:13.816 06:40:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:13.816 06:40:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:13.816 06:40:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:13.816 06:40:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:13.816 06:40:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:13.816 06:40:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:13.816 06:40:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:13.816 06:40:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:13.816 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:13.816 06:40:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
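The discovery trace above reduces to: take the PCI functions that matched the e810 allow-list (0x8086:0x159b here), then resolve each one to its kernel netdev through sysfs. A rough sketch of that lookup, assuming pci_devs is already populated as in the trace:

    net_devs=()
    for pci in "${pci_devs[@]}"; do                        # e.g. 0000:0a:00.0 and 0000:0a:00.1
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs lists the netdev behind the function
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done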
00:14:13.816 06:40:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:13.816 06:40:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:13.816 06:40:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:13.816 06:40:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:13.816 06:40:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:13.816 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:13.816 06:40:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:13.816 06:40:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:13.816 06:40:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:13.816 06:40:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:13.816 06:40:18 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:13.816 06:40:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:13.816 06:40:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:13.816 06:40:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:13.816 06:40:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:13.816 06:40:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:13.816 06:40:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:13.816 06:40:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:13.816 06:40:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:13.816 06:40:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:13.816 06:40:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:13.816 06:40:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:13.816 06:40:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:13.816 06:40:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:13.816 06:40:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:13.816 06:40:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:13.816 06:40:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:13.816 06:40:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:13.816 06:40:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:13.816 06:40:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:13.816 06:40:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:13.816 06:40:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:13.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:13.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:14:13.816 00:14:13.816 --- 10.0.0.2 ping statistics --- 00:14:13.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.816 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:14:13.816 06:40:18 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:13.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:13.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:14:13.816 00:14:13.816 --- 10.0.0.1 ping statistics --- 00:14:13.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.816 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:14:13.816 06:40:18 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:13.816 06:40:18 -- nvmf/common.sh@411 -- # return 0 00:14:13.816 06:40:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:13.816 06:40:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:13.816 06:40:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:13.816 06:40:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:13.816 06:40:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:13.816 06:40:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:13.816 06:40:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:13.817 06:40:18 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:13.817 06:40:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:13.817 06:40:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:13.817 06:40:18 -- common/autotest_common.sh@10 -- # set +x 00:14:13.817 06:40:18 -- nvmf/common.sh@470 -- # nvmfpid=4139752 00:14:13.817 06:40:18 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:13.817 06:40:18 -- nvmf/common.sh@471 -- # waitforlisten 4139752 00:14:13.817 06:40:18 -- common/autotest_common.sh@817 -- # '[' -z 4139752 ']' 00:14:13.817 06:40:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.817 06:40:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:13.817 06:40:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.817 06:40:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:13.817 06:40:18 -- common/autotest_common.sh@10 -- # set +x 00:14:13.817 [2024-04-17 06:40:18.251832] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:14:13.817 [2024-04-17 06:40:18.251931] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.817 EAL: No free 2048 kB hugepages reported on node 1 00:14:13.817 [2024-04-17 06:40:18.323983] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:13.817 [2024-04-17 06:40:18.414599] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:13.817 [2024-04-17 06:40:18.414666] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:13.817 [2024-04-17 06:40:18.414690] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:13.817 [2024-04-17 06:40:18.414705] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:13.817 [2024-04-17 06:40:18.414716] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
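The nvmf_tcp_init steps recorded above put one port of the NIC into its own network namespace for the target and leave the other port in the root namespace for the initiator, so NVMe/TCP traffic between them crosses a real TCP path on a single host. A compact sketch of that plumbing, with addresses and interface names taken from the trace:

    ip netns add cvl_0_0_ns_spdk                                   # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP (port 4420) in
    ping -c 1 10.0.0.2                                             # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns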
00:14:13.817 [2024-04-17 06:40:18.414824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:13.817 [2024-04-17 06:40:18.414927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:13.817 [2024-04-17 06:40:18.414930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.075 06:40:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:14.075 06:40:18 -- common/autotest_common.sh@850 -- # return 0 00:14:14.075 06:40:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:14.075 06:40:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:14.075 06:40:18 -- common/autotest_common.sh@10 -- # set +x 00:14:14.075 06:40:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.075 06:40:18 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:14.075 06:40:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:14.075 06:40:18 -- common/autotest_common.sh@10 -- # set +x 00:14:14.075 [2024-04-17 06:40:18.557257] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:14.075 06:40:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:14.075 06:40:18 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:14.075 06:40:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:14.075 06:40:18 -- common/autotest_common.sh@10 -- # set +x 00:14:14.075 06:40:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:14.075 06:40:18 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:14.075 06:40:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:14.075 06:40:18 -- common/autotest_common.sh@10 -- # set +x 00:14:14.075 [2024-04-17 06:40:18.591330] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.075 06:40:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:14.075 06:40:18 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:14.075 06:40:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:14.075 06:40:18 -- common/autotest_common.sh@10 -- # set +x 00:14:14.075 NULL1 00:14:14.076 06:40:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:14.076 06:40:18 -- target/connect_stress.sh@21 -- # PERF_PID=4139893 00:14:14.076 06:40:18 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:14.076 06:40:18 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:14.076 06:40:18 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:14.076 06:40:18 -- target/connect_stress.sh@27 -- # seq 1 20 00:14:14.076 06:40:18 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.076 06:40:18 -- target/connect_stress.sh@28 -- # cat 00:14:14.076 06:40:18 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.076 06:40:18 -- target/connect_stress.sh@28 -- # cat 00:14:14.076 06:40:18 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.076 06:40:18 -- target/connect_stress.sh@28 -- # cat 00:14:14.076 06:40:18 -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.076 06:40:18 -- target/connect_stress.sh@28 -- # cat 00:14:14.076 06:40:18 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.076 06:40:18 -- target/connect_stress.sh@28 -- # cat 00:14:14.076 06:40:18 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.076 06:40:18 -- target/connect_stress.sh@28 -- # cat 00:14:14.076 06:40:18 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.076 06:40:18 -- target/connect_stress.sh@28 -- # cat 00:14:14.076 06:40:18 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.076 06:40:18 -- target/connect_stress.sh@28 -- # cat 00:14:14.076 06:40:18 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.076 06:40:18 -- target/connect_stress.sh@28 -- # cat 00:14:14.076 06:40:18 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.076 06:40:18 -- target/connect_stress.sh@28 -- # cat 00:14:14.076 06:40:18 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.076 06:40:18 -- target/connect_stress.sh@28 -- # cat 00:14:14.076 06:40:18 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.076 06:40:18 -- target/connect_stress.sh@28 -- # cat 00:14:14.076 EAL: No free 2048 kB hugepages reported on node 1 00:14:14.076 06:40:18 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.076 06:40:18 -- target/connect_stress.sh@28 -- # cat 00:14:14.076 06:40:18 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.076 06:40:18 -- target/connect_stress.sh@28 -- # cat 00:14:14.076 06:40:18 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.076 06:40:18 -- target/connect_stress.sh@28 -- # cat 00:14:14.076 06:40:18 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.076 06:40:18 -- target/connect_stress.sh@28 -- # cat 00:14:14.076 06:40:18 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.076 06:40:18 -- target/connect_stress.sh@28 -- # cat 00:14:14.076 06:40:18 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.076 06:40:18 -- target/connect_stress.sh@28 -- # cat 00:14:14.076 06:40:18 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.076 06:40:18 -- target/connect_stress.sh@28 -- # cat 00:14:14.076 06:40:18 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.076 06:40:18 -- target/connect_stress.sh@28 -- # cat 00:14:14.076 06:40:18 -- target/connect_stress.sh@34 -- # kill -0 4139893 00:14:14.076 06:40:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.076 06:40:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:14.076 06:40:18 -- common/autotest_common.sh@10 -- # set +x 00:14:14.642 06:40:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:14.642 06:40:18 -- target/connect_stress.sh@34 -- # kill -0 4139893 00:14:14.642 06:40:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.642 06:40:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:14.642 06:40:18 -- common/autotest_common.sh@10 -- # set +x 00:14:14.899 06:40:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:14.899 06:40:19 -- target/connect_stress.sh@34 -- # kill -0 4139893 00:14:14.899 06:40:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.899 06:40:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:14.899 06:40:19 -- common/autotest_common.sh@10 -- # set +x 00:14:15.156 06:40:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:15.156 06:40:19 -- target/connect_stress.sh@34 -- # 
kill -0 4139893 00:14:15.157 06:40:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.157 06:40:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:15.157 06:40:19 -- common/autotest_common.sh@10 -- # set +x 00:14:15.414 06:40:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:15.414 06:40:19 -- target/connect_stress.sh@34 -- # kill -0 4139893 00:14:15.414 06:40:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.414 06:40:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:15.414 06:40:19 -- common/autotest_common.sh@10 -- # set +x 00:14:15.672 06:40:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:15.672 06:40:20 -- target/connect_stress.sh@34 -- # kill -0 4139893 00:14:15.672 06:40:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.672 06:40:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:15.672 06:40:20 -- common/autotest_common.sh@10 -- # set +x 00:14:16.237 06:40:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:16.237 06:40:20 -- target/connect_stress.sh@34 -- # kill -0 4139893 00:14:16.237 06:40:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.237 06:40:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:16.237 06:40:20 -- common/autotest_common.sh@10 -- # set +x 00:14:16.495 06:40:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:16.495 06:40:20 -- target/connect_stress.sh@34 -- # kill -0 4139893 00:14:16.495 06:40:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.495 06:40:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:16.495 06:40:20 -- common/autotest_common.sh@10 -- # set +x 00:14:16.752 06:40:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:16.752 06:40:21 -- target/connect_stress.sh@34 -- # kill -0 4139893 00:14:16.752 06:40:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.752 06:40:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:16.752 06:40:21 -- common/autotest_common.sh@10 -- # set +x 00:14:17.009 06:40:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:17.009 06:40:21 -- target/connect_stress.sh@34 -- # kill -0 4139893 00:14:17.009 06:40:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.009 06:40:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:17.009 06:40:21 -- common/autotest_common.sh@10 -- # set +x 00:14:17.298 06:40:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:17.298 06:40:21 -- target/connect_stress.sh@34 -- # kill -0 4139893 00:14:17.298 06:40:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.298 06:40:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:17.298 06:40:21 -- common/autotest_common.sh@10 -- # set +x 00:14:17.862 06:40:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:17.862 06:40:22 -- target/connect_stress.sh@34 -- # kill -0 4139893 00:14:17.862 06:40:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.862 06:40:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:17.862 06:40:22 -- common/autotest_common.sh@10 -- # set +x 00:14:18.119 06:40:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:18.119 06:40:22 -- target/connect_stress.sh@34 -- # kill -0 4139893 00:14:18.119 06:40:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.119 06:40:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:18.119 06:40:22 -- common/autotest_common.sh@10 -- # set +x 00:14:18.377 06:40:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:18.377 06:40:22 -- target/connect_stress.sh@34 -- # kill -0 
4139893 00:14:18.377 06:40:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.377 06:40:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:18.377 06:40:22 -- common/autotest_common.sh@10 -- # set +x 00:14:18.634 06:40:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:18.634 06:40:23 -- target/connect_stress.sh@34 -- # kill -0 4139893 00:14:18.634 06:40:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.634 06:40:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:18.634 06:40:23 -- common/autotest_common.sh@10 -- # set +x 00:14:18.892 06:40:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:18.892 06:40:23 -- target/connect_stress.sh@34 -- # kill -0 4139893 00:14:18.892 06:40:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.892 06:40:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:18.892 06:40:23 -- common/autotest_common.sh@10 -- # set +x 00:14:19.456 06:40:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:19.456 06:40:23 -- target/connect_stress.sh@34 -- # kill -0 4139893 00:14:19.456 06:40:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.456 06:40:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:19.456 06:40:23 -- common/autotest_common.sh@10 -- # set +x 00:14:19.714 06:40:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:19.714 06:40:24 -- target/connect_stress.sh@34 -- # kill -0 4139893 00:14:19.714 06:40:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.714 06:40:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:19.714 06:40:24 -- common/autotest_common.sh@10 -- # set +x 00:14:19.972 06:40:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:19.972 06:40:24 -- target/connect_stress.sh@34 -- # kill -0 4139893 00:14:19.972 06:40:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.972 06:40:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:19.972 06:40:24 -- common/autotest_common.sh@10 -- # set +x 00:14:20.230 06:40:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:20.230 06:40:24 -- target/connect_stress.sh@34 -- # kill -0 4139893 00:14:20.230 06:40:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.230 06:40:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:20.230 06:40:24 -- common/autotest_common.sh@10 -- # set +x 00:14:20.488 06:40:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:20.488 06:40:25 -- target/connect_stress.sh@34 -- # kill -0 4139893 00:14:20.488 06:40:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.488 06:40:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:20.488 06:40:25 -- common/autotest_common.sh@10 -- # set +x 00:14:21.053 06:40:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:21.053 06:40:25 -- target/connect_stress.sh@34 -- # kill -0 4139893 00:14:21.054 06:40:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.054 06:40:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:21.054 06:40:25 -- common/autotest_common.sh@10 -- # set +x 00:14:21.311 06:40:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:21.311 06:40:25 -- target/connect_stress.sh@34 -- # kill -0 4139893 00:14:21.311 06:40:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.311 06:40:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:21.311 06:40:25 -- common/autotest_common.sh@10 -- # set +x 00:14:21.569 06:40:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:21.569 06:40:26 -- target/connect_stress.sh@34 -- # kill -0 4139893 
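The rhythm above and below is the whole connect_stress harness: as long as the connect_stress binary (PERF_PID=4139893) keeps cycling connections to nqn.2016-06.io.spdk:cnode1, the script replays the batch of RPCs it assembled into rpc.txt against the target. The contents of that batch are not visible in this trace, and whether rpc_cmd reads it from stdin is an assumption; a hedged sketch of the control loop only:

    # $rpcs was set earlier in the trace to .../spdk/test/nvmf/target/rpc.txt
    while kill -0 "$PERF_PID" 2> /dev/null; do     # connect_stress still alive?
        rpc_cmd < "$rpcs"                          # assumed: replay the RPC batch over the socket
    done
    wait "$PERF_PID"                               # loop exits once kill -0 reports 'No such process'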
00:14:21.569 06:40:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.569 06:40:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:21.569 06:40:26 -- common/autotest_common.sh@10 -- # set +x 00:14:21.828 06:40:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:21.828 06:40:26 -- target/connect_stress.sh@34 -- # kill -0 4139893 00:14:21.828 06:40:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.828 06:40:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:21.828 06:40:26 -- common/autotest_common.sh@10 -- # set +x 00:14:22.085 06:40:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:22.085 06:40:26 -- target/connect_stress.sh@34 -- # kill -0 4139893 00:14:22.085 06:40:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.085 06:40:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:22.085 06:40:26 -- common/autotest_common.sh@10 -- # set +x 00:14:22.649 06:40:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:22.649 06:40:27 -- target/connect_stress.sh@34 -- # kill -0 4139893 00:14:22.649 06:40:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.649 06:40:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:22.649 06:40:27 -- common/autotest_common.sh@10 -- # set +x 00:14:22.907 06:40:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:22.907 06:40:27 -- target/connect_stress.sh@34 -- # kill -0 4139893 00:14:22.907 06:40:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.907 06:40:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:22.907 06:40:27 -- common/autotest_common.sh@10 -- # set +x 00:14:23.164 06:40:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:23.164 06:40:27 -- target/connect_stress.sh@34 -- # kill -0 4139893 00:14:23.164 06:40:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.164 06:40:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:23.164 06:40:27 -- common/autotest_common.sh@10 -- # set +x 00:14:23.429 06:40:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:23.429 06:40:27 -- target/connect_stress.sh@34 -- # kill -0 4139893 00:14:23.429 06:40:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.429 06:40:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:23.429 06:40:27 -- common/autotest_common.sh@10 -- # set +x 00:14:23.687 06:40:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:23.687 06:40:28 -- target/connect_stress.sh@34 -- # kill -0 4139893 00:14:23.687 06:40:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.687 06:40:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:23.687 06:40:28 -- common/autotest_common.sh@10 -- # set +x 00:14:24.253 06:40:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:24.253 06:40:28 -- target/connect_stress.sh@34 -- # kill -0 4139893 00:14:24.253 06:40:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.253 06:40:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:24.253 06:40:28 -- common/autotest_common.sh@10 -- # set +x 00:14:24.253 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:24.511 06:40:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:24.511 06:40:28 -- target/connect_stress.sh@34 -- # kill -0 4139893 00:14:24.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (4139893) - No such process 00:14:24.511 06:40:28 -- target/connect_stress.sh@38 -- # wait 4139893 00:14:24.511 06:40:28 -- target/connect_stress.sh@39 -- # rm 
-f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:24.511 06:40:28 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:24.511 06:40:28 -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:24.511 06:40:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:24.511 06:40:28 -- nvmf/common.sh@117 -- # sync 00:14:24.511 06:40:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:24.511 06:40:28 -- nvmf/common.sh@120 -- # set +e 00:14:24.511 06:40:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:24.511 06:40:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:24.511 rmmod nvme_tcp 00:14:24.511 rmmod nvme_fabrics 00:14:24.511 rmmod nvme_keyring 00:14:24.511 06:40:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:24.511 06:40:28 -- nvmf/common.sh@124 -- # set -e 00:14:24.511 06:40:28 -- nvmf/common.sh@125 -- # return 0 00:14:24.511 06:40:28 -- nvmf/common.sh@478 -- # '[' -n 4139752 ']' 00:14:24.511 06:40:28 -- nvmf/common.sh@479 -- # killprocess 4139752 00:14:24.511 06:40:28 -- common/autotest_common.sh@936 -- # '[' -z 4139752 ']' 00:14:24.511 06:40:28 -- common/autotest_common.sh@940 -- # kill -0 4139752 00:14:24.511 06:40:28 -- common/autotest_common.sh@941 -- # uname 00:14:24.511 06:40:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:24.511 06:40:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4139752 00:14:24.511 06:40:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:24.511 06:40:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:24.511 06:40:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4139752' 00:14:24.511 killing process with pid 4139752 00:14:24.511 06:40:29 -- common/autotest_common.sh@955 -- # kill 4139752 00:14:24.511 06:40:29 -- common/autotest_common.sh@960 -- # wait 4139752 00:14:24.770 06:40:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:24.770 06:40:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:24.770 06:40:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:24.770 06:40:29 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:24.770 06:40:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:24.770 06:40:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.770 06:40:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:24.770 06:40:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.675 06:40:31 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:26.675 00:14:26.675 real 0m15.205s 00:14:26.675 user 0m37.939s 00:14:26.675 sys 0m5.990s 00:14:26.675 06:40:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:26.675 06:40:31 -- common/autotest_common.sh@10 -- # set +x 00:14:26.675 ************************************ 00:14:26.675 END TEST nvmf_connect_stress 00:14:26.675 ************************************ 00:14:26.934 06:40:31 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:26.934 06:40:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:26.934 06:40:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:26.934 06:40:31 -- common/autotest_common.sh@10 -- # set +x 00:14:26.934 ************************************ 00:14:26.934 START TEST nvmf_fused_ordering 00:14:26.934 ************************************ 00:14:26.934 06:40:31 -- common/autotest_common.sh@1111 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:26.934 * Looking for test storage... 00:14:26.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:26.934 06:40:31 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:26.934 06:40:31 -- nvmf/common.sh@7 -- # uname -s 00:14:26.934 06:40:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:26.934 06:40:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:26.934 06:40:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:26.934 06:40:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:26.934 06:40:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:26.934 06:40:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:26.934 06:40:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:26.934 06:40:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:26.934 06:40:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:26.934 06:40:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:26.934 06:40:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:26.934 06:40:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:26.934 06:40:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:26.934 06:40:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:26.934 06:40:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:26.934 06:40:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:26.934 06:40:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:26.934 06:40:31 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:26.934 06:40:31 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:26.934 06:40:31 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:26.934 06:40:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.934 06:40:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.934 06:40:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.934 06:40:31 -- paths/export.sh@5 -- # export PATH 00:14:26.934 06:40:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.934 06:40:31 -- nvmf/common.sh@47 -- # : 0 00:14:26.934 06:40:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:26.935 06:40:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:26.935 06:40:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:26.935 06:40:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:26.935 06:40:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:26.935 06:40:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:26.935 06:40:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:26.935 06:40:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:26.935 06:40:31 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:26.935 06:40:31 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:26.935 06:40:31 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:26.935 06:40:31 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:26.935 06:40:31 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:26.935 06:40:31 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:26.935 06:40:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.935 06:40:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:26.935 06:40:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.935 06:40:31 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:26.935 06:40:31 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:26.935 06:40:31 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:26.935 06:40:31 -- common/autotest_common.sh@10 -- # set +x 00:14:28.839 06:40:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:28.839 06:40:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:28.839 06:40:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:28.839 06:40:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:28.839 06:40:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:28.839 06:40:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:28.839 06:40:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:28.839 06:40:33 -- nvmf/common.sh@295 -- # net_devs=() 00:14:28.839 06:40:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:28.839 06:40:33 -- nvmf/common.sh@296 -- # e810=() 00:14:28.839 06:40:33 -- nvmf/common.sh@296 -- # local -ga e810 00:14:28.839 06:40:33 -- nvmf/common.sh@297 -- # x722=() 
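
The array declarations here (pci_devs, e810, x722, with mlx just below) are nvmf/common.sh grouping the NICs the test rig knows how to drive by PCI vendor and device ID before it walks the bus. A rough, self-contained equivalent of that bucketing — the script's cached pci_bus_cache lookups are replaced by a plain lspci scan here, and the device-ID list is abbreviated, so treat this as a sketch rather than the script's actual logic:

    declare -a e810=() x722=() mlx=() pci_devs=()
    # lspci -Dnn prints lines like "0000:0a:00.0 Ethernet controller ... [8086:159b]"
    while read -r addr _rest; do
        case "$_rest" in
            *'[8086:159b]'*|*'[8086:1592]'*) e810+=("$addr") ;;   # Intel E810 family (vendor 0x8086)
            *'[8086:37d2]'*)                 x722+=("$addr") ;;   # Intel X722
            *'[15b3:'*)                      mlx+=("$addr")  ;;   # Mellanox ConnectX family (vendor 0x15b3)
        esac
    done < <(lspci -Dnn)
    pci_devs=("${e810[@]}")     # this rig's TCP runs use the two E810 ports found below
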
00:14:28.839 06:40:33 -- nvmf/common.sh@297 -- # local -ga x722 00:14:28.839 06:40:33 -- nvmf/common.sh@298 -- # mlx=() 00:14:28.839 06:40:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:28.839 06:40:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:28.839 06:40:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:28.839 06:40:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:28.839 06:40:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:28.839 06:40:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:28.839 06:40:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:28.839 06:40:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:28.839 06:40:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:28.839 06:40:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:28.839 06:40:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:28.839 06:40:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:28.839 06:40:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:28.839 06:40:33 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:28.839 06:40:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:28.839 06:40:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:28.839 06:40:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:28.839 06:40:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:28.839 06:40:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:28.839 06:40:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:28.839 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:28.839 06:40:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:28.839 06:40:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:28.839 06:40:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:28.839 06:40:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:28.839 06:40:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:28.839 06:40:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:28.839 06:40:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:28.839 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:28.839 06:40:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:28.839 06:40:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:28.839 06:40:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:28.839 06:40:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:28.839 06:40:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:28.839 06:40:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:28.839 06:40:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:28.839 06:40:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:28.839 06:40:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:28.839 06:40:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:28.839 06:40:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:28.839 06:40:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:28.840 06:40:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:28.840 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:28.840 06:40:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
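
The "Found net devices under 0000:0a:00.0: cvl_0_0" entry above comes from globbing that PCI function's net/ directory in sysfs to learn the kernel interface name; the second port is resolved the same way just below. The traced commands amount to:

    net_devs=()
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        pci_net_devs=(/sys/bus/pci/devices/"$pci"/net/*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done
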
00:14:28.840 06:40:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:28.840 06:40:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:28.840 06:40:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:28.840 06:40:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:28.840 06:40:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:28.840 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:28.840 06:40:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:28.840 06:40:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:28.840 06:40:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:28.840 06:40:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:28.840 06:40:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:28.840 06:40:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:28.840 06:40:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:28.840 06:40:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:28.840 06:40:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:28.840 06:40:33 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:28.840 06:40:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:28.840 06:40:33 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:28.840 06:40:33 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:28.840 06:40:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:28.840 06:40:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:28.840 06:40:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:28.840 06:40:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:28.840 06:40:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:28.840 06:40:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:29.099 06:40:33 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:29.099 06:40:33 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:29.099 06:40:33 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:29.099 06:40:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:29.099 06:40:33 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:29.099 06:40:33 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:29.099 06:40:33 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:29.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:29.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:14:29.099 00:14:29.099 --- 10.0.0.2 ping statistics --- 00:14:29.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.099 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:14:29.099 06:40:33 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:29.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:29.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:14:29.099 00:14:29.099 --- 10.0.0.1 ping statistics --- 00:14:29.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.099 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:14:29.099 06:40:33 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:29.099 06:40:33 -- nvmf/common.sh@411 -- # return 0 00:14:29.099 06:40:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:29.099 06:40:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:29.099 06:40:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:29.099 06:40:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:29.099 06:40:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:29.099 06:40:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:29.099 06:40:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:29.099 06:40:33 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:29.099 06:40:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:29.099 06:40:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:29.099 06:40:33 -- common/autotest_common.sh@10 -- # set +x 00:14:29.099 06:40:33 -- nvmf/common.sh@470 -- # nvmfpid=4143052 00:14:29.099 06:40:33 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:29.099 06:40:33 -- nvmf/common.sh@471 -- # waitforlisten 4143052 00:14:29.099 06:40:33 -- common/autotest_common.sh@817 -- # '[' -z 4143052 ']' 00:14:29.099 06:40:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.099 06:40:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:29.099 06:40:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.099 06:40:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:29.099 06:40:33 -- common/autotest_common.sh@10 -- # set +x 00:14:29.099 [2024-04-17 06:40:33.618529] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:14:29.099 [2024-04-17 06:40:33.618603] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.099 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.099 [2024-04-17 06:40:33.682951] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.358 [2024-04-17 06:40:33.770415] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:29.358 [2024-04-17 06:40:33.770496] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:29.358 [2024-04-17 06:40:33.770509] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:29.358 [2024-04-17 06:40:33.770534] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:29.358 [2024-04-17 06:40:33.770544] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
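
Everything from nvmf_tcp_init through the two pings above is the construction of the self-contained TCP rig: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in iptables, and each side pings the other once. Condensed (address flushes omitted), the traced sequence is:

    TARGET_NS=cvl_0_0_ns_spdk
    ip netns add "$TARGET_NS"
    ip link set cvl_0_0 netns "$TARGET_NS"                          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side (root namespace)
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
    ip netns exec "$TARGET_NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port on the test interface
    ping -c 1 10.0.0.2                                              # initiator -> target reachability
    ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1                   # target -> initiator reachability
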
00:14:29.358 [2024-04-17 06:40:33.770577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.358 06:40:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:29.358 06:40:33 -- common/autotest_common.sh@850 -- # return 0 00:14:29.358 06:40:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:29.358 06:40:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:29.358 06:40:33 -- common/autotest_common.sh@10 -- # set +x 00:14:29.358 06:40:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:29.358 06:40:33 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:29.358 06:40:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:29.358 06:40:33 -- common/autotest_common.sh@10 -- # set +x 00:14:29.358 [2024-04-17 06:40:33.921361] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:29.358 06:40:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:29.358 06:40:33 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:29.358 06:40:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:29.358 06:40:33 -- common/autotest_common.sh@10 -- # set +x 00:14:29.358 06:40:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:29.358 06:40:33 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:29.358 06:40:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:29.358 06:40:33 -- common/autotest_common.sh@10 -- # set +x 00:14:29.358 [2024-04-17 06:40:33.937552] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.358 06:40:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:29.358 06:40:33 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:29.358 06:40:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:29.358 06:40:33 -- common/autotest_common.sh@10 -- # set +x 00:14:29.358 NULL1 00:14:29.358 06:40:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:29.358 06:40:33 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:29.358 06:40:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:29.358 06:40:33 -- common/autotest_common.sh@10 -- # set +x 00:14:29.358 06:40:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:29.358 06:40:33 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:29.358 06:40:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:29.358 06:40:33 -- common/autotest_common.sh@10 -- # set +x 00:14:29.616 06:40:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:29.616 06:40:33 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:29.616 [2024-04-17 06:40:33.982065] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
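
With the rig wired up, nvmf_tgt is started inside the namespace and then provisioned entirely over RPC, which is what the nvmfappstart and rpc_cmd entries above record: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, a listener on 10.0.0.2:4420, a 1000 MB null bdev, and its attachment as namespace 1; the fused_ordering binary is then pointed at that listener with a transport-ID string. The same sequence expressed with scripts/rpc.py against the default /var/tmp/spdk.sock socket — paths shortened, so this is a sketch of the traced calls rather than the harness itself:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    RPC='./scripts/rpc.py -s /var/tmp/spdk.sock'
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512                 # 1000 MB backing bdev, 512-byte blocks
    $RPC bdev_wait_for_examine
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    # Initiator side: exercise NVMe fused command ordering against that namespace
    ./test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
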
00:14:29.616 [2024-04-17 06:40:33.982108] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4143071 ] 00:14:29.616 EAL: No free 2048 kB hugepages reported on node 1 00:14:30.182 Attached to nqn.2016-06.io.spdk:cnode1 00:14:30.182 Namespace ID: 1 size: 1GB 00:14:30.182 fused_ordering(0) 00:14:30.182 fused_ordering(1) 00:14:30.182 fused_ordering(2) 00:14:30.182 fused_ordering(3) 00:14:30.182 fused_ordering(4) 00:14:30.182 fused_ordering(5) 00:14:30.182 fused_ordering(6) 00:14:30.182 fused_ordering(7) 00:14:30.182 fused_ordering(8) 00:14:30.182 fused_ordering(9) 00:14:30.182 fused_ordering(10) 00:14:30.182 fused_ordering(11) 00:14:30.182 fused_ordering(12) 00:14:30.182 fused_ordering(13) 00:14:30.182 fused_ordering(14) 00:14:30.182 fused_ordering(15) 00:14:30.182 fused_ordering(16) 00:14:30.182 fused_ordering(17) 00:14:30.182 fused_ordering(18) 00:14:30.182 fused_ordering(19) 00:14:30.182 fused_ordering(20) 00:14:30.182 fused_ordering(21) 00:14:30.182 fused_ordering(22) 00:14:30.182 fused_ordering(23) 00:14:30.182 fused_ordering(24) 00:14:30.182 fused_ordering(25) 00:14:30.182 fused_ordering(26) 00:14:30.182 fused_ordering(27) 00:14:30.182 fused_ordering(28) 00:14:30.182 fused_ordering(29) 00:14:30.182 fused_ordering(30) 00:14:30.182 fused_ordering(31) 00:14:30.182 fused_ordering(32) 00:14:30.182 fused_ordering(33) 00:14:30.182 fused_ordering(34) 00:14:30.182 fused_ordering(35) 00:14:30.182 fused_ordering(36) 00:14:30.182 fused_ordering(37) 00:14:30.182 fused_ordering(38) 00:14:30.182 fused_ordering(39) 00:14:30.182 fused_ordering(40) 00:14:30.182 fused_ordering(41) 00:14:30.182 fused_ordering(42) 00:14:30.182 fused_ordering(43) 00:14:30.182 fused_ordering(44) 00:14:30.182 fused_ordering(45) 00:14:30.182 fused_ordering(46) 00:14:30.182 fused_ordering(47) 00:14:30.182 fused_ordering(48) 00:14:30.182 fused_ordering(49) 00:14:30.182 fused_ordering(50) 00:14:30.182 fused_ordering(51) 00:14:30.182 fused_ordering(52) 00:14:30.182 fused_ordering(53) 00:14:30.182 fused_ordering(54) 00:14:30.182 fused_ordering(55) 00:14:30.182 fused_ordering(56) 00:14:30.182 fused_ordering(57) 00:14:30.182 fused_ordering(58) 00:14:30.182 fused_ordering(59) 00:14:30.182 fused_ordering(60) 00:14:30.182 fused_ordering(61) 00:14:30.182 fused_ordering(62) 00:14:30.182 fused_ordering(63) 00:14:30.182 fused_ordering(64) 00:14:30.182 fused_ordering(65) 00:14:30.182 fused_ordering(66) 00:14:30.182 fused_ordering(67) 00:14:30.182 fused_ordering(68) 00:14:30.182 fused_ordering(69) 00:14:30.182 fused_ordering(70) 00:14:30.182 fused_ordering(71) 00:14:30.182 fused_ordering(72) 00:14:30.182 fused_ordering(73) 00:14:30.182 fused_ordering(74) 00:14:30.182 fused_ordering(75) 00:14:30.182 fused_ordering(76) 00:14:30.182 fused_ordering(77) 00:14:30.182 fused_ordering(78) 00:14:30.182 fused_ordering(79) 00:14:30.182 fused_ordering(80) 00:14:30.182 fused_ordering(81) 00:14:30.182 fused_ordering(82) 00:14:30.182 fused_ordering(83) 00:14:30.182 fused_ordering(84) 00:14:30.182 fused_ordering(85) 00:14:30.182 fused_ordering(86) 00:14:30.182 fused_ordering(87) 00:14:30.182 fused_ordering(88) 00:14:30.182 fused_ordering(89) 00:14:30.182 fused_ordering(90) 00:14:30.182 fused_ordering(91) 00:14:30.182 fused_ordering(92) 00:14:30.182 fused_ordering(93) 00:14:30.182 fused_ordering(94) 00:14:30.182 fused_ordering(95) 00:14:30.182 fused_ordering(96) 00:14:30.182 
fused_ordering(97) 00:14:30.182 fused_ordering(98) 00:14:30.182 fused_ordering(99) 00:14:30.182 fused_ordering(100) 00:14:30.182 fused_ordering(101) 00:14:30.182 fused_ordering(102) 00:14:30.182 fused_ordering(103) 00:14:30.182 fused_ordering(104) 00:14:30.182 fused_ordering(105) 00:14:30.182 fused_ordering(106) 00:14:30.182 fused_ordering(107) 00:14:30.182 fused_ordering(108) 00:14:30.182 fused_ordering(109) 00:14:30.182 fused_ordering(110) 00:14:30.182 fused_ordering(111) 00:14:30.182 fused_ordering(112) 00:14:30.182 fused_ordering(113) 00:14:30.182 fused_ordering(114) 00:14:30.182 fused_ordering(115) 00:14:30.182 fused_ordering(116) 00:14:30.182 fused_ordering(117) 00:14:30.182 fused_ordering(118) 00:14:30.182 fused_ordering(119) 00:14:30.182 fused_ordering(120) 00:14:30.182 fused_ordering(121) 00:14:30.182 fused_ordering(122) 00:14:30.182 fused_ordering(123) 00:14:30.182 fused_ordering(124) 00:14:30.182 fused_ordering(125) 00:14:30.182 fused_ordering(126) 00:14:30.182 fused_ordering(127) 00:14:30.182 fused_ordering(128) 00:14:30.182 fused_ordering(129) 00:14:30.182 fused_ordering(130) 00:14:30.182 fused_ordering(131) 00:14:30.182 fused_ordering(132) 00:14:30.182 fused_ordering(133) 00:14:30.182 fused_ordering(134) 00:14:30.182 fused_ordering(135) 00:14:30.182 fused_ordering(136) 00:14:30.182 fused_ordering(137) 00:14:30.182 fused_ordering(138) 00:14:30.182 fused_ordering(139) 00:14:30.182 fused_ordering(140) 00:14:30.182 fused_ordering(141) 00:14:30.182 fused_ordering(142) 00:14:30.182 fused_ordering(143) 00:14:30.182 fused_ordering(144) 00:14:30.182 fused_ordering(145) 00:14:30.182 fused_ordering(146) 00:14:30.182 fused_ordering(147) 00:14:30.182 fused_ordering(148) 00:14:30.182 fused_ordering(149) 00:14:30.182 fused_ordering(150) 00:14:30.182 fused_ordering(151) 00:14:30.182 fused_ordering(152) 00:14:30.182 fused_ordering(153) 00:14:30.182 fused_ordering(154) 00:14:30.182 fused_ordering(155) 00:14:30.182 fused_ordering(156) 00:14:30.182 fused_ordering(157) 00:14:30.182 fused_ordering(158) 00:14:30.182 fused_ordering(159) 00:14:30.182 fused_ordering(160) 00:14:30.182 fused_ordering(161) 00:14:30.182 fused_ordering(162) 00:14:30.182 fused_ordering(163) 00:14:30.182 fused_ordering(164) 00:14:30.182 fused_ordering(165) 00:14:30.182 fused_ordering(166) 00:14:30.182 fused_ordering(167) 00:14:30.182 fused_ordering(168) 00:14:30.182 fused_ordering(169) 00:14:30.182 fused_ordering(170) 00:14:30.182 fused_ordering(171) 00:14:30.182 fused_ordering(172) 00:14:30.182 fused_ordering(173) 00:14:30.182 fused_ordering(174) 00:14:30.182 fused_ordering(175) 00:14:30.182 fused_ordering(176) 00:14:30.182 fused_ordering(177) 00:14:30.182 fused_ordering(178) 00:14:30.182 fused_ordering(179) 00:14:30.182 fused_ordering(180) 00:14:30.182 fused_ordering(181) 00:14:30.182 fused_ordering(182) 00:14:30.182 fused_ordering(183) 00:14:30.182 fused_ordering(184) 00:14:30.182 fused_ordering(185) 00:14:30.182 fused_ordering(186) 00:14:30.182 fused_ordering(187) 00:14:30.182 fused_ordering(188) 00:14:30.182 fused_ordering(189) 00:14:30.182 fused_ordering(190) 00:14:30.182 fused_ordering(191) 00:14:30.182 fused_ordering(192) 00:14:30.182 fused_ordering(193) 00:14:30.182 fused_ordering(194) 00:14:30.182 fused_ordering(195) 00:14:30.182 fused_ordering(196) 00:14:30.182 fused_ordering(197) 00:14:30.182 fused_ordering(198) 00:14:30.182 fused_ordering(199) 00:14:30.182 fused_ordering(200) 00:14:30.182 fused_ordering(201) 00:14:30.182 fused_ordering(202) 00:14:30.182 fused_ordering(203) 00:14:30.182 fused_ordering(204) 
00:14:30.182 fused_ordering(205) 00:14:30.441 fused_ordering(206) 00:14:30.441 fused_ordering(207) 00:14:30.441 fused_ordering(208) 00:14:30.441 fused_ordering(209) 00:14:30.441 fused_ordering(210) 00:14:30.441 fused_ordering(211) 00:14:30.441 fused_ordering(212) 00:14:30.441 fused_ordering(213) 00:14:30.441 fused_ordering(214) 00:14:30.441 fused_ordering(215) 00:14:30.441 fused_ordering(216) 00:14:30.441 fused_ordering(217) 00:14:30.441 fused_ordering(218) 00:14:30.441 fused_ordering(219) 00:14:30.441 fused_ordering(220) 00:14:30.441 fused_ordering(221) 00:14:30.441 fused_ordering(222) 00:14:30.441 fused_ordering(223) 00:14:30.441 fused_ordering(224) 00:14:30.441 fused_ordering(225) 00:14:30.441 fused_ordering(226) 00:14:30.441 fused_ordering(227) 00:14:30.441 fused_ordering(228) 00:14:30.441 fused_ordering(229) 00:14:30.441 fused_ordering(230) 00:14:30.441 fused_ordering(231) 00:14:30.441 fused_ordering(232) 00:14:30.441 fused_ordering(233) 00:14:30.441 fused_ordering(234) 00:14:30.441 fused_ordering(235) 00:14:30.441 fused_ordering(236) 00:14:30.441 fused_ordering(237) 00:14:30.441 fused_ordering(238) 00:14:30.441 fused_ordering(239) 00:14:30.441 fused_ordering(240) 00:14:30.441 fused_ordering(241) 00:14:30.441 fused_ordering(242) 00:14:30.441 fused_ordering(243) 00:14:30.441 fused_ordering(244) 00:14:30.441 fused_ordering(245) 00:14:30.441 fused_ordering(246) 00:14:30.441 fused_ordering(247) 00:14:30.441 fused_ordering(248) 00:14:30.441 fused_ordering(249) 00:14:30.441 fused_ordering(250) 00:14:30.441 fused_ordering(251) 00:14:30.441 fused_ordering(252) 00:14:30.441 fused_ordering(253) 00:14:30.441 fused_ordering(254) 00:14:30.441 fused_ordering(255) 00:14:30.441 fused_ordering(256) 00:14:30.441 fused_ordering(257) 00:14:30.441 fused_ordering(258) 00:14:30.441 fused_ordering(259) 00:14:30.441 fused_ordering(260) 00:14:30.441 fused_ordering(261) 00:14:30.441 fused_ordering(262) 00:14:30.441 fused_ordering(263) 00:14:30.441 fused_ordering(264) 00:14:30.441 fused_ordering(265) 00:14:30.441 fused_ordering(266) 00:14:30.441 fused_ordering(267) 00:14:30.441 fused_ordering(268) 00:14:30.441 fused_ordering(269) 00:14:30.441 fused_ordering(270) 00:14:30.441 fused_ordering(271) 00:14:30.441 fused_ordering(272) 00:14:30.441 fused_ordering(273) 00:14:30.441 fused_ordering(274) 00:14:30.441 fused_ordering(275) 00:14:30.441 fused_ordering(276) 00:14:30.441 fused_ordering(277) 00:14:30.441 fused_ordering(278) 00:14:30.441 fused_ordering(279) 00:14:30.441 fused_ordering(280) 00:14:30.441 fused_ordering(281) 00:14:30.441 fused_ordering(282) 00:14:30.441 fused_ordering(283) 00:14:30.441 fused_ordering(284) 00:14:30.441 fused_ordering(285) 00:14:30.441 fused_ordering(286) 00:14:30.441 fused_ordering(287) 00:14:30.441 fused_ordering(288) 00:14:30.441 fused_ordering(289) 00:14:30.441 fused_ordering(290) 00:14:30.441 fused_ordering(291) 00:14:30.441 fused_ordering(292) 00:14:30.441 fused_ordering(293) 00:14:30.441 fused_ordering(294) 00:14:30.441 fused_ordering(295) 00:14:30.441 fused_ordering(296) 00:14:30.441 fused_ordering(297) 00:14:30.441 fused_ordering(298) 00:14:30.441 fused_ordering(299) 00:14:30.441 fused_ordering(300) 00:14:30.441 fused_ordering(301) 00:14:30.441 fused_ordering(302) 00:14:30.441 fused_ordering(303) 00:14:30.441 fused_ordering(304) 00:14:30.441 fused_ordering(305) 00:14:30.441 fused_ordering(306) 00:14:30.441 fused_ordering(307) 00:14:30.441 fused_ordering(308) 00:14:30.441 fused_ordering(309) 00:14:30.441 fused_ordering(310) 00:14:30.441 fused_ordering(311) 00:14:30.441 
fused_ordering(312) 00:14:30.441 fused_ordering(313) 00:14:30.441 fused_ordering(314) 00:14:30.441 fused_ordering(315) 00:14:30.441 fused_ordering(316) 00:14:30.441 fused_ordering(317) 00:14:30.441 fused_ordering(318) 00:14:30.441 fused_ordering(319) 00:14:30.441 fused_ordering(320) 00:14:30.441 fused_ordering(321) 00:14:30.441 fused_ordering(322) 00:14:30.441 fused_ordering(323) 00:14:30.441 fused_ordering(324) 00:14:30.441 fused_ordering(325) 00:14:30.441 fused_ordering(326) 00:14:30.441 fused_ordering(327) 00:14:30.441 fused_ordering(328) 00:14:30.441 fused_ordering(329) 00:14:30.441 fused_ordering(330) 00:14:30.441 fused_ordering(331) 00:14:30.441 fused_ordering(332) 00:14:30.441 fused_ordering(333) 00:14:30.441 fused_ordering(334) 00:14:30.441 fused_ordering(335) 00:14:30.441 fused_ordering(336) 00:14:30.441 fused_ordering(337) 00:14:30.441 fused_ordering(338) 00:14:30.441 fused_ordering(339) 00:14:30.441 fused_ordering(340) 00:14:30.441 fused_ordering(341) 00:14:30.441 fused_ordering(342) 00:14:30.441 fused_ordering(343) 00:14:30.441 fused_ordering(344) 00:14:30.441 fused_ordering(345) 00:14:30.441 fused_ordering(346) 00:14:30.441 fused_ordering(347) 00:14:30.441 fused_ordering(348) 00:14:30.441 fused_ordering(349) 00:14:30.441 fused_ordering(350) 00:14:30.441 fused_ordering(351) 00:14:30.441 fused_ordering(352) 00:14:30.441 fused_ordering(353) 00:14:30.441 fused_ordering(354) 00:14:30.441 fused_ordering(355) 00:14:30.441 fused_ordering(356) 00:14:30.441 fused_ordering(357) 00:14:30.441 fused_ordering(358) 00:14:30.441 fused_ordering(359) 00:14:30.441 fused_ordering(360) 00:14:30.441 fused_ordering(361) 00:14:30.441 fused_ordering(362) 00:14:30.441 fused_ordering(363) 00:14:30.441 fused_ordering(364) 00:14:30.441 fused_ordering(365) 00:14:30.441 fused_ordering(366) 00:14:30.441 fused_ordering(367) 00:14:30.441 fused_ordering(368) 00:14:30.441 fused_ordering(369) 00:14:30.441 fused_ordering(370) 00:14:30.441 fused_ordering(371) 00:14:30.441 fused_ordering(372) 00:14:30.441 fused_ordering(373) 00:14:30.441 fused_ordering(374) 00:14:30.441 fused_ordering(375) 00:14:30.441 fused_ordering(376) 00:14:30.441 fused_ordering(377) 00:14:30.441 fused_ordering(378) 00:14:30.441 fused_ordering(379) 00:14:30.441 fused_ordering(380) 00:14:30.441 fused_ordering(381) 00:14:30.441 fused_ordering(382) 00:14:30.441 fused_ordering(383) 00:14:30.441 fused_ordering(384) 00:14:30.441 fused_ordering(385) 00:14:30.441 fused_ordering(386) 00:14:30.441 fused_ordering(387) 00:14:30.441 fused_ordering(388) 00:14:30.441 fused_ordering(389) 00:14:30.441 fused_ordering(390) 00:14:30.441 fused_ordering(391) 00:14:30.441 fused_ordering(392) 00:14:30.441 fused_ordering(393) 00:14:30.441 fused_ordering(394) 00:14:30.441 fused_ordering(395) 00:14:30.441 fused_ordering(396) 00:14:30.441 fused_ordering(397) 00:14:30.441 fused_ordering(398) 00:14:30.441 fused_ordering(399) 00:14:30.441 fused_ordering(400) 00:14:30.441 fused_ordering(401) 00:14:30.441 fused_ordering(402) 00:14:30.441 fused_ordering(403) 00:14:30.441 fused_ordering(404) 00:14:30.441 fused_ordering(405) 00:14:30.441 fused_ordering(406) 00:14:30.441 fused_ordering(407) 00:14:30.441 fused_ordering(408) 00:14:30.441 fused_ordering(409) 00:14:30.441 fused_ordering(410) 00:14:31.007 fused_ordering(411) 00:14:31.007 fused_ordering(412) 00:14:31.007 fused_ordering(413) 00:14:31.007 fused_ordering(414) 00:14:31.007 fused_ordering(415) 00:14:31.007 fused_ordering(416) 00:14:31.007 fused_ordering(417) 00:14:31.007 fused_ordering(418) 00:14:31.007 fused_ordering(419) 
00:14:31.007 fused_ordering(420) 00:14:31.007 fused_ordering(421) 00:14:31.007 fused_ordering(422) 00:14:31.007 fused_ordering(423) 00:14:31.007 fused_ordering(424) 00:14:31.007 fused_ordering(425) 00:14:31.007 fused_ordering(426) 00:14:31.007 fused_ordering(427) 00:14:31.007 fused_ordering(428) 00:14:31.007 fused_ordering(429) 00:14:31.007 fused_ordering(430) 00:14:31.007 fused_ordering(431) 00:14:31.007 fused_ordering(432) 00:14:31.007 fused_ordering(433) 00:14:31.007 fused_ordering(434) 00:14:31.007 fused_ordering(435) 00:14:31.007 fused_ordering(436) 00:14:31.007 fused_ordering(437) 00:14:31.007 fused_ordering(438) 00:14:31.007 fused_ordering(439) 00:14:31.007 fused_ordering(440) 00:14:31.007 fused_ordering(441) 00:14:31.007 fused_ordering(442) 00:14:31.007 fused_ordering(443) 00:14:31.007 fused_ordering(444) 00:14:31.007 fused_ordering(445) 00:14:31.007 fused_ordering(446) 00:14:31.007 fused_ordering(447) 00:14:31.007 fused_ordering(448) 00:14:31.007 fused_ordering(449) 00:14:31.007 fused_ordering(450) 00:14:31.007 fused_ordering(451) 00:14:31.007 fused_ordering(452) 00:14:31.007 fused_ordering(453) 00:14:31.007 fused_ordering(454) 00:14:31.007 fused_ordering(455) 00:14:31.007 fused_ordering(456) 00:14:31.007 fused_ordering(457) 00:14:31.007 fused_ordering(458) 00:14:31.007 fused_ordering(459) 00:14:31.008 fused_ordering(460) 00:14:31.008 fused_ordering(461) 00:14:31.008 fused_ordering(462) 00:14:31.008 fused_ordering(463) 00:14:31.008 fused_ordering(464) 00:14:31.008 fused_ordering(465) 00:14:31.008 fused_ordering(466) 00:14:31.008 fused_ordering(467) 00:14:31.008 fused_ordering(468) 00:14:31.008 fused_ordering(469) 00:14:31.008 fused_ordering(470) 00:14:31.008 fused_ordering(471) 00:14:31.008 fused_ordering(472) 00:14:31.008 fused_ordering(473) 00:14:31.008 fused_ordering(474) 00:14:31.008 fused_ordering(475) 00:14:31.008 fused_ordering(476) 00:14:31.008 fused_ordering(477) 00:14:31.008 fused_ordering(478) 00:14:31.008 fused_ordering(479) 00:14:31.008 fused_ordering(480) 00:14:31.008 fused_ordering(481) 00:14:31.008 fused_ordering(482) 00:14:31.008 fused_ordering(483) 00:14:31.008 fused_ordering(484) 00:14:31.008 fused_ordering(485) 00:14:31.008 fused_ordering(486) 00:14:31.008 fused_ordering(487) 00:14:31.008 fused_ordering(488) 00:14:31.008 fused_ordering(489) 00:14:31.008 fused_ordering(490) 00:14:31.008 fused_ordering(491) 00:14:31.008 fused_ordering(492) 00:14:31.008 fused_ordering(493) 00:14:31.008 fused_ordering(494) 00:14:31.008 fused_ordering(495) 00:14:31.008 fused_ordering(496) 00:14:31.008 fused_ordering(497) 00:14:31.008 fused_ordering(498) 00:14:31.008 fused_ordering(499) 00:14:31.008 fused_ordering(500) 00:14:31.008 fused_ordering(501) 00:14:31.008 fused_ordering(502) 00:14:31.008 fused_ordering(503) 00:14:31.008 fused_ordering(504) 00:14:31.008 fused_ordering(505) 00:14:31.008 fused_ordering(506) 00:14:31.008 fused_ordering(507) 00:14:31.008 fused_ordering(508) 00:14:31.008 fused_ordering(509) 00:14:31.008 fused_ordering(510) 00:14:31.008 fused_ordering(511) 00:14:31.008 fused_ordering(512) 00:14:31.008 fused_ordering(513) 00:14:31.008 fused_ordering(514) 00:14:31.008 fused_ordering(515) 00:14:31.008 fused_ordering(516) 00:14:31.008 fused_ordering(517) 00:14:31.008 fused_ordering(518) 00:14:31.008 fused_ordering(519) 00:14:31.008 fused_ordering(520) 00:14:31.008 fused_ordering(521) 00:14:31.008 fused_ordering(522) 00:14:31.008 fused_ordering(523) 00:14:31.008 fused_ordering(524) 00:14:31.008 fused_ordering(525) 00:14:31.008 fused_ordering(526) 00:14:31.008 
fused_ordering(527) 00:14:31.008 fused_ordering(528) 00:14:31.008 fused_ordering(529) 00:14:31.008 fused_ordering(530) 00:14:31.008 fused_ordering(531) 00:14:31.008 fused_ordering(532) 00:14:31.008 fused_ordering(533) 00:14:31.008 fused_ordering(534) 00:14:31.008 fused_ordering(535) 00:14:31.008 fused_ordering(536) 00:14:31.008 fused_ordering(537) 00:14:31.008 fused_ordering(538) 00:14:31.008 fused_ordering(539) 00:14:31.008 fused_ordering(540) 00:14:31.008 fused_ordering(541) 00:14:31.008 fused_ordering(542) 00:14:31.008 fused_ordering(543) 00:14:31.008 fused_ordering(544) 00:14:31.008 fused_ordering(545) 00:14:31.008 fused_ordering(546) 00:14:31.008 fused_ordering(547) 00:14:31.008 fused_ordering(548) 00:14:31.008 fused_ordering(549) 00:14:31.008 fused_ordering(550) 00:14:31.008 fused_ordering(551) 00:14:31.008 fused_ordering(552) 00:14:31.008 fused_ordering(553) 00:14:31.008 fused_ordering(554) 00:14:31.008 fused_ordering(555) 00:14:31.008 fused_ordering(556) 00:14:31.008 fused_ordering(557) 00:14:31.008 fused_ordering(558) 00:14:31.008 fused_ordering(559) 00:14:31.008 fused_ordering(560) 00:14:31.008 fused_ordering(561) 00:14:31.008 fused_ordering(562) 00:14:31.008 fused_ordering(563) 00:14:31.008 fused_ordering(564) 00:14:31.008 fused_ordering(565) 00:14:31.008 fused_ordering(566) 00:14:31.008 fused_ordering(567) 00:14:31.008 fused_ordering(568) 00:14:31.008 fused_ordering(569) 00:14:31.008 fused_ordering(570) 00:14:31.008 fused_ordering(571) 00:14:31.008 fused_ordering(572) 00:14:31.008 fused_ordering(573) 00:14:31.008 fused_ordering(574) 00:14:31.008 fused_ordering(575) 00:14:31.008 fused_ordering(576) 00:14:31.008 fused_ordering(577) 00:14:31.008 fused_ordering(578) 00:14:31.008 fused_ordering(579) 00:14:31.008 fused_ordering(580) 00:14:31.008 fused_ordering(581) 00:14:31.008 fused_ordering(582) 00:14:31.008 fused_ordering(583) 00:14:31.008 fused_ordering(584) 00:14:31.008 fused_ordering(585) 00:14:31.008 fused_ordering(586) 00:14:31.008 fused_ordering(587) 00:14:31.008 fused_ordering(588) 00:14:31.008 fused_ordering(589) 00:14:31.008 fused_ordering(590) 00:14:31.008 fused_ordering(591) 00:14:31.008 fused_ordering(592) 00:14:31.008 fused_ordering(593) 00:14:31.008 fused_ordering(594) 00:14:31.008 fused_ordering(595) 00:14:31.008 fused_ordering(596) 00:14:31.008 fused_ordering(597) 00:14:31.008 fused_ordering(598) 00:14:31.008 fused_ordering(599) 00:14:31.008 fused_ordering(600) 00:14:31.008 fused_ordering(601) 00:14:31.008 fused_ordering(602) 00:14:31.008 fused_ordering(603) 00:14:31.008 fused_ordering(604) 00:14:31.008 fused_ordering(605) 00:14:31.008 fused_ordering(606) 00:14:31.008 fused_ordering(607) 00:14:31.008 fused_ordering(608) 00:14:31.008 fused_ordering(609) 00:14:31.008 fused_ordering(610) 00:14:31.008 fused_ordering(611) 00:14:31.008 fused_ordering(612) 00:14:31.008 fused_ordering(613) 00:14:31.008 fused_ordering(614) 00:14:31.008 fused_ordering(615) 00:14:31.975 fused_ordering(616) 00:14:31.975 fused_ordering(617) 00:14:31.975 fused_ordering(618) 00:14:31.975 fused_ordering(619) 00:14:31.975 fused_ordering(620) 00:14:31.975 fused_ordering(621) 00:14:31.975 fused_ordering(622) 00:14:31.975 fused_ordering(623) 00:14:31.975 fused_ordering(624) 00:14:31.975 fused_ordering(625) 00:14:31.975 fused_ordering(626) 00:14:31.975 fused_ordering(627) 00:14:31.975 fused_ordering(628) 00:14:31.975 fused_ordering(629) 00:14:31.975 fused_ordering(630) 00:14:31.975 fused_ordering(631) 00:14:31.975 fused_ordering(632) 00:14:31.975 fused_ordering(633) 00:14:31.975 fused_ordering(634) 
00:14:31.975 fused_ordering(635) 00:14:31.975 fused_ordering(636) 00:14:31.975 fused_ordering(637) 00:14:31.975 fused_ordering(638) 00:14:31.975 fused_ordering(639) 00:14:31.975 fused_ordering(640) 00:14:31.975 fused_ordering(641) 00:14:31.975 fused_ordering(642) 00:14:31.975 fused_ordering(643) 00:14:31.975 fused_ordering(644) 00:14:31.975 fused_ordering(645) 00:14:31.975 fused_ordering(646) 00:14:31.975 fused_ordering(647) 00:14:31.975 fused_ordering(648) 00:14:31.975 fused_ordering(649) 00:14:31.975 fused_ordering(650) 00:14:31.975 fused_ordering(651) 00:14:31.975 fused_ordering(652) 00:14:31.975 fused_ordering(653) 00:14:31.975 fused_ordering(654) 00:14:31.975 fused_ordering(655) 00:14:31.975 fused_ordering(656) 00:14:31.975 fused_ordering(657) 00:14:31.975 fused_ordering(658) 00:14:31.975 fused_ordering(659) 00:14:31.975 fused_ordering(660) 00:14:31.975 fused_ordering(661) 00:14:31.975 fused_ordering(662) 00:14:31.975 fused_ordering(663) 00:14:31.975 fused_ordering(664) 00:14:31.975 fused_ordering(665) 00:14:31.975 fused_ordering(666) 00:14:31.975 fused_ordering(667) 00:14:31.975 fused_ordering(668) 00:14:31.975 fused_ordering(669) 00:14:31.975 fused_ordering(670) 00:14:31.975 fused_ordering(671) 00:14:31.975 fused_ordering(672) 00:14:31.975 fused_ordering(673) 00:14:31.975 fused_ordering(674) 00:14:31.975 fused_ordering(675) 00:14:31.975 fused_ordering(676) 00:14:31.975 fused_ordering(677) 00:14:31.975 fused_ordering(678) 00:14:31.975 fused_ordering(679) 00:14:31.975 fused_ordering(680) 00:14:31.975 fused_ordering(681) 00:14:31.975 fused_ordering(682) 00:14:31.975 fused_ordering(683) 00:14:31.975 fused_ordering(684) 00:14:31.975 fused_ordering(685) 00:14:31.975 fused_ordering(686) 00:14:31.975 fused_ordering(687) 00:14:31.975 fused_ordering(688) 00:14:31.975 fused_ordering(689) 00:14:31.975 fused_ordering(690) 00:14:31.975 fused_ordering(691) 00:14:31.975 fused_ordering(692) 00:14:31.975 fused_ordering(693) 00:14:31.975 fused_ordering(694) 00:14:31.975 fused_ordering(695) 00:14:31.975 fused_ordering(696) 00:14:31.975 fused_ordering(697) 00:14:31.975 fused_ordering(698) 00:14:31.975 fused_ordering(699) 00:14:31.975 fused_ordering(700) 00:14:31.975 fused_ordering(701) 00:14:31.975 fused_ordering(702) 00:14:31.975 fused_ordering(703) 00:14:31.975 fused_ordering(704) 00:14:31.975 fused_ordering(705) 00:14:31.975 fused_ordering(706) 00:14:31.975 fused_ordering(707) 00:14:31.975 fused_ordering(708) 00:14:31.975 fused_ordering(709) 00:14:31.975 fused_ordering(710) 00:14:31.975 fused_ordering(711) 00:14:31.975 fused_ordering(712) 00:14:31.975 fused_ordering(713) 00:14:31.975 fused_ordering(714) 00:14:31.975 fused_ordering(715) 00:14:31.975 fused_ordering(716) 00:14:31.975 fused_ordering(717) 00:14:31.975 fused_ordering(718) 00:14:31.975 fused_ordering(719) 00:14:31.975 fused_ordering(720) 00:14:31.975 fused_ordering(721) 00:14:31.975 fused_ordering(722) 00:14:31.975 fused_ordering(723) 00:14:31.975 fused_ordering(724) 00:14:31.975 fused_ordering(725) 00:14:31.975 fused_ordering(726) 00:14:31.975 fused_ordering(727) 00:14:31.975 fused_ordering(728) 00:14:31.975 fused_ordering(729) 00:14:31.975 fused_ordering(730) 00:14:31.975 fused_ordering(731) 00:14:31.975 fused_ordering(732) 00:14:31.975 fused_ordering(733) 00:14:31.975 fused_ordering(734) 00:14:31.975 fused_ordering(735) 00:14:31.975 fused_ordering(736) 00:14:31.975 fused_ordering(737) 00:14:31.975 fused_ordering(738) 00:14:31.975 fused_ordering(739) 00:14:31.975 fused_ordering(740) 00:14:31.975 fused_ordering(741) 00:14:31.975 
fused_ordering(742) 00:14:31.975 fused_ordering(743) 00:14:31.975 fused_ordering(744) 00:14:31.975 fused_ordering(745) 00:14:31.975 fused_ordering(746) 00:14:31.975 fused_ordering(747) 00:14:31.975 fused_ordering(748) 00:14:31.975 fused_ordering(749) 00:14:31.975 fused_ordering(750) 00:14:31.975 fused_ordering(751) 00:14:31.975 fused_ordering(752) 00:14:31.975 fused_ordering(753) 00:14:31.975 fused_ordering(754) 00:14:31.975 fused_ordering(755) 00:14:31.975 fused_ordering(756) 00:14:31.975 fused_ordering(757) 00:14:31.975 fused_ordering(758) 00:14:31.975 fused_ordering(759) 00:14:31.975 fused_ordering(760) 00:14:31.975 fused_ordering(761) 00:14:31.975 fused_ordering(762) 00:14:31.976 fused_ordering(763) 00:14:31.976 fused_ordering(764) 00:14:31.976 fused_ordering(765) 00:14:31.976 fused_ordering(766) 00:14:31.976 fused_ordering(767) 00:14:31.976 fused_ordering(768) 00:14:31.976 fused_ordering(769) 00:14:31.976 fused_ordering(770) 00:14:31.976 fused_ordering(771) 00:14:31.976 fused_ordering(772) 00:14:31.976 fused_ordering(773) 00:14:31.976 fused_ordering(774) 00:14:31.976 fused_ordering(775) 00:14:31.976 fused_ordering(776) 00:14:31.976 fused_ordering(777) 00:14:31.976 fused_ordering(778) 00:14:31.976 fused_ordering(779) 00:14:31.976 fused_ordering(780) 00:14:31.976 fused_ordering(781) 00:14:31.976 fused_ordering(782) 00:14:31.976 fused_ordering(783) 00:14:31.976 fused_ordering(784) 00:14:31.976 fused_ordering(785) 00:14:31.976 fused_ordering(786) 00:14:31.976 fused_ordering(787) 00:14:31.976 fused_ordering(788) 00:14:31.976 fused_ordering(789) 00:14:31.976 fused_ordering(790) 00:14:31.976 fused_ordering(791) 00:14:31.976 fused_ordering(792) 00:14:31.976 fused_ordering(793) 00:14:31.976 fused_ordering(794) 00:14:31.976 fused_ordering(795) 00:14:31.976 fused_ordering(796) 00:14:31.976 fused_ordering(797) 00:14:31.976 fused_ordering(798) 00:14:31.976 fused_ordering(799) 00:14:31.976 fused_ordering(800) 00:14:31.976 fused_ordering(801) 00:14:31.976 fused_ordering(802) 00:14:31.976 fused_ordering(803) 00:14:31.976 fused_ordering(804) 00:14:31.976 fused_ordering(805) 00:14:31.976 fused_ordering(806) 00:14:31.976 fused_ordering(807) 00:14:31.976 fused_ordering(808) 00:14:31.976 fused_ordering(809) 00:14:31.976 fused_ordering(810) 00:14:31.976 fused_ordering(811) 00:14:31.976 fused_ordering(812) 00:14:31.976 fused_ordering(813) 00:14:31.976 fused_ordering(814) 00:14:31.976 fused_ordering(815) 00:14:31.976 fused_ordering(816) 00:14:31.976 fused_ordering(817) 00:14:31.976 fused_ordering(818) 00:14:31.976 fused_ordering(819) 00:14:31.976 fused_ordering(820) 00:14:32.541 fused_ordering(821) 00:14:32.541 fused_ordering(822) 00:14:32.541 fused_ordering(823) 00:14:32.541 fused_ordering(824) 00:14:32.541 fused_ordering(825) 00:14:32.541 fused_ordering(826) 00:14:32.541 fused_ordering(827) 00:14:32.541 fused_ordering(828) 00:14:32.541 fused_ordering(829) 00:14:32.541 fused_ordering(830) 00:14:32.541 fused_ordering(831) 00:14:32.541 fused_ordering(832) 00:14:32.541 fused_ordering(833) 00:14:32.541 fused_ordering(834) 00:14:32.541 fused_ordering(835) 00:14:32.541 fused_ordering(836) 00:14:32.541 fused_ordering(837) 00:14:32.541 fused_ordering(838) 00:14:32.541 fused_ordering(839) 00:14:32.541 fused_ordering(840) 00:14:32.541 fused_ordering(841) 00:14:32.541 fused_ordering(842) 00:14:32.541 fused_ordering(843) 00:14:32.541 fused_ordering(844) 00:14:32.541 fused_ordering(845) 00:14:32.541 fused_ordering(846) 00:14:32.541 fused_ordering(847) 00:14:32.541 fused_ordering(848) 00:14:32.541 fused_ordering(849) 
00:14:32.541 fused_ordering(850) 00:14:32.541 fused_ordering(851) 00:14:32.541 fused_ordering(852) 00:14:32.541 fused_ordering(853) 00:14:32.541 fused_ordering(854) 00:14:32.541 fused_ordering(855) 00:14:32.541 fused_ordering(856) 00:14:32.541 fused_ordering(857) 00:14:32.541 fused_ordering(858) 00:14:32.541 fused_ordering(859) 00:14:32.541 fused_ordering(860) 00:14:32.541 fused_ordering(861) 00:14:32.541 fused_ordering(862) 00:14:32.541 fused_ordering(863) 00:14:32.541 fused_ordering(864) 00:14:32.541 fused_ordering(865) 00:14:32.541 fused_ordering(866) 00:14:32.541 fused_ordering(867) 00:14:32.541 fused_ordering(868) 00:14:32.541 fused_ordering(869) 00:14:32.541 fused_ordering(870) 00:14:32.541 fused_ordering(871) 00:14:32.541 fused_ordering(872) 00:14:32.541 fused_ordering(873) 00:14:32.541 fused_ordering(874) 00:14:32.541 fused_ordering(875) 00:14:32.541 fused_ordering(876) 00:14:32.541 fused_ordering(877) 00:14:32.541 fused_ordering(878) 00:14:32.541 fused_ordering(879) 00:14:32.541 fused_ordering(880) 00:14:32.541 fused_ordering(881) 00:14:32.541 fused_ordering(882) 00:14:32.541 fused_ordering(883) 00:14:32.541 fused_ordering(884) 00:14:32.541 fused_ordering(885) 00:14:32.541 fused_ordering(886) 00:14:32.541 fused_ordering(887) 00:14:32.541 fused_ordering(888) 00:14:32.541 fused_ordering(889) 00:14:32.541 fused_ordering(890) 00:14:32.541 fused_ordering(891) 00:14:32.541 fused_ordering(892) 00:14:32.541 fused_ordering(893) 00:14:32.541 fused_ordering(894) 00:14:32.541 fused_ordering(895) 00:14:32.541 fused_ordering(896) 00:14:32.541 fused_ordering(897) 00:14:32.541 fused_ordering(898) 00:14:32.541 fused_ordering(899) 00:14:32.541 fused_ordering(900) 00:14:32.541 fused_ordering(901) 00:14:32.541 fused_ordering(902) 00:14:32.541 fused_ordering(903) 00:14:32.541 fused_ordering(904) 00:14:32.541 fused_ordering(905) 00:14:32.541 fused_ordering(906) 00:14:32.541 fused_ordering(907) 00:14:32.541 fused_ordering(908) 00:14:32.541 fused_ordering(909) 00:14:32.541 fused_ordering(910) 00:14:32.541 fused_ordering(911) 00:14:32.541 fused_ordering(912) 00:14:32.541 fused_ordering(913) 00:14:32.541 fused_ordering(914) 00:14:32.541 fused_ordering(915) 00:14:32.541 fused_ordering(916) 00:14:32.541 fused_ordering(917) 00:14:32.541 fused_ordering(918) 00:14:32.541 fused_ordering(919) 00:14:32.541 fused_ordering(920) 00:14:32.541 fused_ordering(921) 00:14:32.541 fused_ordering(922) 00:14:32.541 fused_ordering(923) 00:14:32.541 fused_ordering(924) 00:14:32.541 fused_ordering(925) 00:14:32.541 fused_ordering(926) 00:14:32.541 fused_ordering(927) 00:14:32.541 fused_ordering(928) 00:14:32.541 fused_ordering(929) 00:14:32.541 fused_ordering(930) 00:14:32.541 fused_ordering(931) 00:14:32.541 fused_ordering(932) 00:14:32.541 fused_ordering(933) 00:14:32.541 fused_ordering(934) 00:14:32.541 fused_ordering(935) 00:14:32.541 fused_ordering(936) 00:14:32.541 fused_ordering(937) 00:14:32.541 fused_ordering(938) 00:14:32.541 fused_ordering(939) 00:14:32.541 fused_ordering(940) 00:14:32.541 fused_ordering(941) 00:14:32.541 fused_ordering(942) 00:14:32.541 fused_ordering(943) 00:14:32.541 fused_ordering(944) 00:14:32.541 fused_ordering(945) 00:14:32.541 fused_ordering(946) 00:14:32.541 fused_ordering(947) 00:14:32.541 fused_ordering(948) 00:14:32.541 fused_ordering(949) 00:14:32.541 fused_ordering(950) 00:14:32.541 fused_ordering(951) 00:14:32.541 fused_ordering(952) 00:14:32.541 fused_ordering(953) 00:14:32.541 fused_ordering(954) 00:14:32.541 fused_ordering(955) 00:14:32.541 fused_ordering(956) 00:14:32.541 
fused_ordering(957) 00:14:32.541 fused_ordering(958) 00:14:32.541 fused_ordering(959) 00:14:32.541 fused_ordering(960) 00:14:32.541 fused_ordering(961) 00:14:32.541 fused_ordering(962) 00:14:32.541 fused_ordering(963) 00:14:32.541 fused_ordering(964) 00:14:32.541 fused_ordering(965) 00:14:32.541 fused_ordering(966) 00:14:32.541 fused_ordering(967) 00:14:32.541 fused_ordering(968) 00:14:32.541 fused_ordering(969) 00:14:32.541 fused_ordering(970) 00:14:32.541 fused_ordering(971) 00:14:32.541 fused_ordering(972) 00:14:32.541 fused_ordering(973) 00:14:32.541 fused_ordering(974) 00:14:32.541 fused_ordering(975) 00:14:32.541 fused_ordering(976) 00:14:32.541 fused_ordering(977) 00:14:32.541 fused_ordering(978) 00:14:32.541 fused_ordering(979) 00:14:32.541 fused_ordering(980) 00:14:32.541 fused_ordering(981) 00:14:32.541 fused_ordering(982) 00:14:32.541 fused_ordering(983) 00:14:32.541 fused_ordering(984) 00:14:32.541 fused_ordering(985) 00:14:32.541 fused_ordering(986) 00:14:32.541 fused_ordering(987) 00:14:32.541 fused_ordering(988) 00:14:32.541 fused_ordering(989) 00:14:32.541 fused_ordering(990) 00:14:32.541 fused_ordering(991) 00:14:32.541 fused_ordering(992) 00:14:32.541 fused_ordering(993) 00:14:32.541 fused_ordering(994) 00:14:32.541 fused_ordering(995) 00:14:32.541 fused_ordering(996) 00:14:32.541 fused_ordering(997) 00:14:32.541 fused_ordering(998) 00:14:32.541 fused_ordering(999) 00:14:32.541 fused_ordering(1000) 00:14:32.541 fused_ordering(1001) 00:14:32.541 fused_ordering(1002) 00:14:32.541 fused_ordering(1003) 00:14:32.541 fused_ordering(1004) 00:14:32.541 fused_ordering(1005) 00:14:32.541 fused_ordering(1006) 00:14:32.541 fused_ordering(1007) 00:14:32.541 fused_ordering(1008) 00:14:32.541 fused_ordering(1009) 00:14:32.541 fused_ordering(1010) 00:14:32.541 fused_ordering(1011) 00:14:32.541 fused_ordering(1012) 00:14:32.541 fused_ordering(1013) 00:14:32.541 fused_ordering(1014) 00:14:32.541 fused_ordering(1015) 00:14:32.541 fused_ordering(1016) 00:14:32.541 fused_ordering(1017) 00:14:32.541 fused_ordering(1018) 00:14:32.541 fused_ordering(1019) 00:14:32.541 fused_ordering(1020) 00:14:32.541 fused_ordering(1021) 00:14:32.541 fused_ordering(1022) 00:14:32.541 fused_ordering(1023) 00:14:32.541 06:40:37 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:32.541 06:40:37 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:32.541 06:40:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:32.541 06:40:37 -- nvmf/common.sh@117 -- # sync 00:14:32.541 06:40:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:32.541 06:40:37 -- nvmf/common.sh@120 -- # set +e 00:14:32.541 06:40:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:32.541 06:40:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:32.541 rmmod nvme_tcp 00:14:32.541 rmmod nvme_fabrics 00:14:32.541 rmmod nvme_keyring 00:14:32.799 06:40:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:32.799 06:40:37 -- nvmf/common.sh@124 -- # set -e 00:14:32.799 06:40:37 -- nvmf/common.sh@125 -- # return 0 00:14:32.799 06:40:37 -- nvmf/common.sh@478 -- # '[' -n 4143052 ']' 00:14:32.799 06:40:37 -- nvmf/common.sh@479 -- # killprocess 4143052 00:14:32.799 06:40:37 -- common/autotest_common.sh@936 -- # '[' -z 4143052 ']' 00:14:32.799 06:40:37 -- common/autotest_common.sh@940 -- # kill -0 4143052 00:14:32.799 06:40:37 -- common/autotest_common.sh@941 -- # uname 00:14:32.799 06:40:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:32.799 06:40:37 -- common/autotest_common.sh@942 -- # ps --no-headers 
-o comm= 4143052 00:14:32.799 06:40:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:32.799 06:40:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:32.799 06:40:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4143052' 00:14:32.799 killing process with pid 4143052 00:14:32.799 06:40:37 -- common/autotest_common.sh@955 -- # kill 4143052 00:14:32.799 06:40:37 -- common/autotest_common.sh@960 -- # wait 4143052 00:14:33.057 06:40:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:33.057 06:40:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:33.057 06:40:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:33.057 06:40:37 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:33.057 06:40:37 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:33.057 06:40:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.057 06:40:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.057 06:40:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.962 06:40:39 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:34.962 00:14:34.962 real 0m8.067s 00:14:34.962 user 0m5.582s 00:14:34.962 sys 0m3.875s 00:14:34.962 06:40:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:34.962 06:40:39 -- common/autotest_common.sh@10 -- # set +x 00:14:34.962 ************************************ 00:14:34.962 END TEST nvmf_fused_ordering 00:14:34.962 ************************************ 00:14:34.962 06:40:39 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:34.962 06:40:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:34.962 06:40:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:34.962 06:40:39 -- common/autotest_common.sh@10 -- # set +x 00:14:35.220 ************************************ 00:14:35.220 START TEST nvmf_delete_subsystem 00:14:35.220 ************************************ 00:14:35.220 06:40:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:35.220 * Looking for test storage... 
00:14:35.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:35.220 06:40:39 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:35.220 06:40:39 -- nvmf/common.sh@7 -- # uname -s 00:14:35.220 06:40:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:35.220 06:40:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.220 06:40:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.220 06:40:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.220 06:40:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.220 06:40:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.221 06:40:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.221 06:40:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.221 06:40:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.221 06:40:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.221 06:40:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:35.221 06:40:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:35.221 06:40:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.221 06:40:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.221 06:40:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:35.221 06:40:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:35.221 06:40:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:35.221 06:40:39 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.221 06:40:39 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.221 06:40:39 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.221 06:40:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.221 06:40:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.221 06:40:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.221 06:40:39 -- paths/export.sh@5 -- # export PATH 00:14:35.221 06:40:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.221 06:40:39 -- nvmf/common.sh@47 -- # : 0 00:14:35.221 06:40:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:35.221 06:40:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:35.221 06:40:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:35.221 06:40:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.221 06:40:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.221 06:40:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:35.221 06:40:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:35.221 06:40:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:35.221 06:40:39 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:35.221 06:40:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:35.221 06:40:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:35.221 06:40:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:35.221 06:40:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:35.221 06:40:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:35.221 06:40:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.221 06:40:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:35.221 06:40:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.221 06:40:39 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:35.221 06:40:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:35.221 06:40:39 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:35.221 06:40:39 -- common/autotest_common.sh@10 -- # set +x 00:14:37.754 06:40:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:37.754 06:40:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:37.754 06:40:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:37.754 06:40:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:37.754 06:40:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:37.754 06:40:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:37.754 06:40:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:37.754 06:40:41 -- nvmf/common.sh@295 -- # net_devs=() 00:14:37.754 06:40:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:37.754 06:40:41 -- nvmf/common.sh@296 -- # e810=() 00:14:37.754 06:40:41 -- nvmf/common.sh@296 -- # local -ga e810 00:14:37.754 06:40:41 -- nvmf/common.sh@297 -- # x722=() 
00:14:37.754 06:40:41 -- nvmf/common.sh@297 -- # local -ga x722 00:14:37.754 06:40:41 -- nvmf/common.sh@298 -- # mlx=() 00:14:37.754 06:40:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:37.754 06:40:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:37.754 06:40:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:37.754 06:40:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:37.754 06:40:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:37.754 06:40:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:37.754 06:40:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:37.754 06:40:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:37.754 06:40:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:37.754 06:40:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:37.754 06:40:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:37.754 06:40:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:37.754 06:40:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:37.754 06:40:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:37.754 06:40:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:37.754 06:40:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:37.754 06:40:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:37.754 06:40:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:37.754 06:40:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:37.754 06:40:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:37.754 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:37.754 06:40:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:37.754 06:40:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:37.754 06:40:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.754 06:40:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.754 06:40:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:37.754 06:40:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:37.754 06:40:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:37.754 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:37.754 06:40:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:37.754 06:40:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:37.754 06:40:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.754 06:40:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.754 06:40:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:37.754 06:40:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:37.754 06:40:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:37.754 06:40:41 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:37.754 06:40:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:37.754 06:40:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.754 06:40:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:37.754 06:40:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.754 06:40:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:37.754 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:37.754 06:40:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:14:37.754 06:40:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:37.754 06:40:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.754 06:40:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:37.754 06:40:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.754 06:40:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:37.754 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:37.754 06:40:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.754 06:40:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:37.754 06:40:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:37.754 06:40:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:37.754 06:40:41 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:37.754 06:40:41 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:37.754 06:40:41 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:37.754 06:40:41 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:37.754 06:40:41 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:37.754 06:40:41 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:37.754 06:40:41 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:37.754 06:40:41 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:37.754 06:40:41 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:37.754 06:40:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:37.754 06:40:41 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:37.754 06:40:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:37.754 06:40:41 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:37.754 06:40:41 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:37.754 06:40:41 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:37.754 06:40:41 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:37.754 06:40:41 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:37.754 06:40:41 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:37.754 06:40:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:37.754 06:40:41 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:37.754 06:40:41 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:37.754 06:40:41 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:37.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:37.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:14:37.754 00:14:37.754 --- 10.0.0.2 ping statistics --- 00:14:37.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.754 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:14:37.754 06:40:41 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:37.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:37.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:14:37.754 00:14:37.754 --- 10.0.0.1 ping statistics --- 00:14:37.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.754 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:14:37.754 06:40:41 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:37.754 06:40:41 -- nvmf/common.sh@411 -- # return 0 00:14:37.754 06:40:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:37.754 06:40:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:37.754 06:40:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:37.754 06:40:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:37.754 06:40:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:37.754 06:40:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:37.754 06:40:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:37.754 06:40:41 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:37.754 06:40:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:37.754 06:40:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:37.754 06:40:41 -- common/autotest_common.sh@10 -- # set +x 00:14:37.754 06:40:41 -- nvmf/common.sh@470 -- # nvmfpid=4145403 00:14:37.754 06:40:41 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:37.754 06:40:41 -- nvmf/common.sh@471 -- # waitforlisten 4145403 00:14:37.754 06:40:41 -- common/autotest_common.sh@817 -- # '[' -z 4145403 ']' 00:14:37.754 06:40:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.754 06:40:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:37.754 06:40:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.754 06:40:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:37.754 06:40:41 -- common/autotest_common.sh@10 -- # set +x 00:14:37.754 [2024-04-17 06:40:41.969720] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:14:37.754 [2024-04-17 06:40:41.969821] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:37.754 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.754 [2024-04-17 06:40:42.043982] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:37.754 [2024-04-17 06:40:42.136635] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:37.754 [2024-04-17 06:40:42.136710] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:37.754 [2024-04-17 06:40:42.136735] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:37.754 [2024-04-17 06:40:42.136749] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:37.754 [2024-04-17 06:40:42.136760] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
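The interface and namespace plumbing echoed above boils down to a short sequence. The sketch below condenses those same commands into one place; the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, and the nvmf_tgt flags are simply what this CI run used, not fixed requirements.

    # Target port lives in its own network namespace; initiator port stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                  # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator reachability
    # The target application is then started inside the namespace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &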
00:14:37.754 [2024-04-17 06:40:42.137611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.754 [2024-04-17 06:40:42.137623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.754 06:40:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:37.754 06:40:42 -- common/autotest_common.sh@850 -- # return 0 00:14:37.754 06:40:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:37.754 06:40:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:37.754 06:40:42 -- common/autotest_common.sh@10 -- # set +x 00:14:37.754 06:40:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:37.754 06:40:42 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:37.754 06:40:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:37.754 06:40:42 -- common/autotest_common.sh@10 -- # set +x 00:14:37.755 [2024-04-17 06:40:42.276005] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:37.755 06:40:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:37.755 06:40:42 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:37.755 06:40:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:37.755 06:40:42 -- common/autotest_common.sh@10 -- # set +x 00:14:37.755 06:40:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:37.755 06:40:42 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:37.755 06:40:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:37.755 06:40:42 -- common/autotest_common.sh@10 -- # set +x 00:14:37.755 [2024-04-17 06:40:42.292212] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.755 06:40:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:37.755 06:40:42 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:37.755 06:40:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:37.755 06:40:42 -- common/autotest_common.sh@10 -- # set +x 00:14:37.755 NULL1 00:14:37.755 06:40:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:37.755 06:40:42 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:37.755 06:40:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:37.755 06:40:42 -- common/autotest_common.sh@10 -- # set +x 00:14:37.755 Delay0 00:14:37.755 06:40:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:37.755 06:40:42 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:37.755 06:40:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:37.755 06:40:42 -- common/autotest_common.sh@10 -- # set +x 00:14:37.755 06:40:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:37.755 06:40:42 -- target/delete_subsystem.sh@28 -- # perf_pid=4145550 00:14:37.755 06:40:42 -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:37.755 06:40:42 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:37.755 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.012 [2024-04-17 06:40:42.366955] 
subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:39.910 06:40:44 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:39.910 06:40:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.910 06:40:44 -- common/autotest_common.sh@10 -- # set +x 00:14:39.910 Read completed with error (sct=0, sc=8) 00:14:39.910 Read completed with error (sct=0, sc=8) 00:14:39.910 Read completed with error (sct=0, sc=8) 00:14:39.910 starting I/O failed: -6 00:14:39.910 Read completed with error (sct=0, sc=8) 00:14:39.910 Read completed with error (sct=0, sc=8) 00:14:39.910 Read completed with error (sct=0, sc=8) 00:14:39.910 Read completed with error (sct=0, sc=8) 00:14:39.910 starting I/O failed: -6 00:14:39.910 Write completed with error (sct=0, sc=8) 00:14:39.910 Read completed with error (sct=0, sc=8) 00:14:39.910 Write completed with error (sct=0, sc=8) 00:14:39.910 Read completed with error (sct=0, sc=8) 00:14:39.910 starting I/O failed: -6 00:14:39.910 Read completed with error (sct=0, sc=8) 00:14:39.910 Write completed with error (sct=0, sc=8) 00:14:39.910 Read completed with error (sct=0, sc=8) 00:14:39.910 Write completed with error (sct=0, sc=8) 00:14:39.910 starting I/O failed: -6 00:14:39.910 Write completed with error (sct=0, sc=8) 00:14:39.910 Read completed with error (sct=0, sc=8) 00:14:39.910 Write completed with error (sct=0, sc=8) 00:14:39.910 Read completed with error (sct=0, sc=8) 00:14:39.910 starting I/O failed: -6 00:14:39.910 Write completed with error (sct=0, sc=8) 00:14:39.910 Read completed with error (sct=0, sc=8) 00:14:39.910 Write completed with error (sct=0, sc=8) 00:14:39.910 Read completed with error (sct=0, sc=8) 00:14:39.910 starting I/O failed: -6 00:14:39.910 Read completed with error (sct=0, sc=8) 00:14:39.910 Write completed with error (sct=0, sc=8) 00:14:39.910 Read completed with error (sct=0, sc=8) 00:14:39.910 Read completed with error (sct=0, sc=8) 00:14:39.910 starting I/O failed: -6 00:14:39.910 Read completed with error (sct=0, sc=8) 00:14:39.910 Write completed with error (sct=0, sc=8) 00:14:39.910 Read completed with error (sct=0, sc=8) 00:14:39.910 Read completed with error (sct=0, sc=8) 00:14:39.910 Read completed with error (sct=0, sc=8) 00:14:39.910 starting I/O failed: -6 00:14:39.910 Read completed with error (sct=0, sc=8) 00:14:39.910 Write completed with error (sct=0, sc=8) 00:14:39.910 Read completed with error (sct=0, sc=8) 00:14:39.910 Read completed with error (sct=0, sc=8) 00:14:39.910 Write completed with error (sct=0, sc=8) 00:14:39.910 starting I/O failed: -6 00:14:39.910 Read completed with error (sct=0, sc=8) 00:14:39.910 Read completed with error (sct=0, sc=8) 00:14:39.910 Read completed with error (sct=0, sc=8) 00:14:39.910 Write completed with error (sct=0, sc=8) 00:14:39.910 starting I/O failed: -6 00:14:39.910 Write completed with error (sct=0, sc=8) 00:14:39.910 Write completed with error (sct=0, sc=8) 00:14:39.910 Write completed with error (sct=0, sc=8) 00:14:39.910 Read completed with error (sct=0, sc=8) 00:14:39.910 starting I/O failed: -6 00:14:39.910 Read completed with error (sct=0, sc=8) 00:14:39.910 Write completed with error (sct=0, sc=8) 00:14:39.910 Read completed with error (sct=0, sc=8) 00:14:39.910 Write completed with error (sct=0, sc=8) 
00:14:39.910 starting I/O failed: -6 (the remaining queued requests in this stretch are each reported as 'Read completed with error (sct=0, sc=8)' or 'Write completed with error (sct=0, sc=8)', interleaved with further 'starting I/O failed: -6' markers; the distinct transport errors logged in this stretch follow)
[2024-04-17 06:40:44.409725] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9d1000c250 is same with the state(5) to be set
[2024-04-17 06:40:45.384250] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102d4c0 is same with the state(5) to be set
[2024-04-17 06:40:45.407513] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1010b80 is same with the state(5) to be set
[2024-04-17 06:40:45.407777] nvme_tcp.c:
322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1015ce0 is same with the state(5) to be set 00:14:40.845 Read completed with error (sct=0, sc=8) 00:14:40.845 Write completed with error (sct=0, sc=8) 00:14:40.845 Read completed with error (sct=0, sc=8) 00:14:40.845 Read completed with error (sct=0, sc=8) 00:14:40.845 Read completed with error (sct=0, sc=8) 00:14:40.845 Read completed with error (sct=0, sc=8) 00:14:40.845 Write completed with error (sct=0, sc=8) 00:14:40.845 Read completed with error (sct=0, sc=8) 00:14:40.845 Read completed with error (sct=0, sc=8) 00:14:40.845 Read completed with error (sct=0, sc=8) 00:14:40.845 Write completed with error (sct=0, sc=8) 00:14:40.845 Write completed with error (sct=0, sc=8) 00:14:40.845 Read completed with error (sct=0, sc=8) 00:14:40.845 Read completed with error (sct=0, sc=8) 00:14:40.845 Write completed with error (sct=0, sc=8) 00:14:40.845 Read completed with error (sct=0, sc=8) 00:14:40.845 Write completed with error (sct=0, sc=8) 00:14:40.845 Read completed with error (sct=0, sc=8) 00:14:40.845 Read completed with error (sct=0, sc=8) 00:14:40.845 Write completed with error (sct=0, sc=8) 00:14:40.845 Read completed with error (sct=0, sc=8) 00:14:40.845 Read completed with error (sct=0, sc=8) 00:14:40.845 Read completed with error (sct=0, sc=8) 00:14:40.845 [2024-04-17 06:40:45.412218] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9d1000bf90 is same with the state(5) to be set 00:14:40.845 Read completed with error (sct=0, sc=8) 00:14:40.845 Read completed with error (sct=0, sc=8) 00:14:40.845 Write completed with error (sct=0, sc=8) 00:14:40.845 Read completed with error (sct=0, sc=8) 00:14:40.845 Read completed with error (sct=0, sc=8) 00:14:40.845 Write completed with error (sct=0, sc=8) 00:14:40.845 Read completed with error (sct=0, sc=8) 00:14:40.845 Read completed with error (sct=0, sc=8) 00:14:40.845 Read completed with error (sct=0, sc=8) 00:14:40.845 Write completed with error (sct=0, sc=8) 00:14:40.845 Read completed with error (sct=0, sc=8) 00:14:40.845 Read completed with error (sct=0, sc=8) 00:14:40.845 Read completed with error (sct=0, sc=8) 00:14:40.845 Write completed with error (sct=0, sc=8) 00:14:40.845 Write completed with error (sct=0, sc=8) 00:14:40.845 Write completed with error (sct=0, sc=8) 00:14:40.845 [2024-04-17 06:40:45.412869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9d1000c510 is same with the state(5) to be set 00:14:40.845 [2024-04-17 06:40:45.413377] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102d4c0 (9): Bad file descriptor 00:14:40.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:40.845 06:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:40.845 06:40:45 -- target/delete_subsystem.sh@34 -- # delay=0 00:14:40.845 06:40:45 -- target/delete_subsystem.sh@35 -- # kill -0 4145550 00:14:40.845 06:40:45 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:40.845 Initializing NVMe Controllers 00:14:40.845 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:40.845 Controller IO queue size 128, less than required. 00:14:40.845 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:14:40.845 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:40.845 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:40.845 Initialization complete. Launching workers. 00:14:40.845 ======================================================== 00:14:40.845 Latency(us) 00:14:40.845 Device Information : IOPS MiB/s Average min max 00:14:40.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 187.45 0.09 901358.35 688.66 1013705.79 00:14:40.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.22 0.08 949208.32 585.41 2004111.26 00:14:40.845 ======================================================== 00:14:40.845 Total : 342.67 0.17 923032.79 585.41 2004111.26 00:14:40.845 00:14:41.411 06:40:45 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:41.411 06:40:45 -- target/delete_subsystem.sh@35 -- # kill -0 4145550 00:14:41.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (4145550) - No such process 00:14:41.411 06:40:45 -- target/delete_subsystem.sh@45 -- # NOT wait 4145550 00:14:41.411 06:40:45 -- common/autotest_common.sh@638 -- # local es=0 00:14:41.411 06:40:45 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 4145550 00:14:41.411 06:40:45 -- common/autotest_common.sh@626 -- # local arg=wait 00:14:41.411 06:40:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:41.411 06:40:45 -- common/autotest_common.sh@630 -- # type -t wait 00:14:41.411 06:40:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:41.411 06:40:45 -- common/autotest_common.sh@641 -- # wait 4145550 00:14:41.411 06:40:45 -- common/autotest_common.sh@641 -- # es=1 00:14:41.411 06:40:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:41.411 06:40:45 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:41.411 06:40:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:41.411 06:40:45 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:41.411 06:40:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:41.411 06:40:45 -- common/autotest_common.sh@10 -- # set +x 00:14:41.411 06:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:41.411 06:40:45 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:41.411 06:40:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:41.411 06:40:45 -- common/autotest_common.sh@10 -- # set +x 00:14:41.411 [2024-04-17 06:40:45.937548] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:41.411 06:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:41.411 06:40:45 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:41.411 06:40:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:41.411 06:40:45 -- common/autotest_common.sh@10 -- # set +x 00:14:41.411 06:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:41.411 06:40:45 -- target/delete_subsystem.sh@54 -- # perf_pid=4145948 00:14:41.411 06:40:45 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 
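Stripped of the xtrace prefixes, the pattern delete_subsystem.sh exercises here (with -t 5 on the first pass and -t 3 on the second) reads roughly as the following sketch; rpc.py stands in for the rpc_cmd wrapper, the binary paths are shortened, and the values are the ones echoed in this run.

    # Subsystem backed by a deliberately slow namespace (a delay bdev over a null bdev),
    # so plenty of I/O is still in flight when the subsystem is deleted.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512
    rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # Start a timed random read/write load, then delete the subsystem out from under it.
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # Outstanding requests then complete with errors and perf exits with "errors occurred",
    # which is the failure pattern logged above.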
00:14:41.411 06:40:45 -- target/delete_subsystem.sh@56 -- # delay=0 00:14:41.411 06:40:45 -- target/delete_subsystem.sh@57 -- # kill -0 4145948 00:14:41.411 06:40:45 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:41.411 EAL: No free 2048 kB hugepages reported on node 1 00:14:41.411 [2024-04-17 06:40:45.999486] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:41.975 06:40:46 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:41.975 06:40:46 -- target/delete_subsystem.sh@57 -- # kill -0 4145948 00:14:41.975 06:40:46 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:42.602 06:40:46 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:42.602 06:40:46 -- target/delete_subsystem.sh@57 -- # kill -0 4145948 00:14:42.602 06:40:46 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:42.859 06:40:47 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:42.859 06:40:47 -- target/delete_subsystem.sh@57 -- # kill -0 4145948 00:14:42.859 06:40:47 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:43.423 06:40:47 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:43.423 06:40:47 -- target/delete_subsystem.sh@57 -- # kill -0 4145948 00:14:43.423 06:40:47 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:43.988 06:40:48 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:43.988 06:40:48 -- target/delete_subsystem.sh@57 -- # kill -0 4145948 00:14:43.988 06:40:48 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:44.553 06:40:48 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:44.553 06:40:48 -- target/delete_subsystem.sh@57 -- # kill -0 4145948 00:14:44.553 06:40:48 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:44.553 Initializing NVMe Controllers 00:14:44.553 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:44.553 Controller IO queue size 128, less than required. 00:14:44.553 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:44.553 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:44.553 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:44.553 Initialization complete. Launching workers. 
00:14:44.553 ======================================================== 00:14:44.553 Latency(us) 00:14:44.553 Device Information : IOPS MiB/s Average min max 00:14:44.553 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003928.57 1000210.26 1013429.46 00:14:44.553 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006274.50 1000276.80 1042816.45 00:14:44.553 ======================================================== 00:14:44.553 Total : 256.00 0.12 1005101.53 1000210.26 1042816.45 00:14:44.553 00:14:45.119 06:40:49 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:45.119 06:40:49 -- target/delete_subsystem.sh@57 -- # kill -0 4145948 00:14:45.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4145948) - No such process 00:14:45.119 06:40:49 -- target/delete_subsystem.sh@67 -- # wait 4145948 00:14:45.119 06:40:49 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:45.119 06:40:49 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:45.119 06:40:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:45.119 06:40:49 -- nvmf/common.sh@117 -- # sync 00:14:45.119 06:40:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:45.119 06:40:49 -- nvmf/common.sh@120 -- # set +e 00:14:45.119 06:40:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:45.119 06:40:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:45.119 rmmod nvme_tcp 00:14:45.119 rmmod nvme_fabrics 00:14:45.119 rmmod nvme_keyring 00:14:45.119 06:40:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:45.119 06:40:49 -- nvmf/common.sh@124 -- # set -e 00:14:45.119 06:40:49 -- nvmf/common.sh@125 -- # return 0 00:14:45.119 06:40:49 -- nvmf/common.sh@478 -- # '[' -n 4145403 ']' 00:14:45.119 06:40:49 -- nvmf/common.sh@479 -- # killprocess 4145403 00:14:45.119 06:40:49 -- common/autotest_common.sh@936 -- # '[' -z 4145403 ']' 00:14:45.119 06:40:49 -- common/autotest_common.sh@940 -- # kill -0 4145403 00:14:45.119 06:40:49 -- common/autotest_common.sh@941 -- # uname 00:14:45.119 06:40:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:45.119 06:40:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4145403 00:14:45.119 06:40:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:45.119 06:40:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:45.119 06:40:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4145403' 00:14:45.119 killing process with pid 4145403 00:14:45.119 06:40:49 -- common/autotest_common.sh@955 -- # kill 4145403 00:14:45.119 06:40:49 -- common/autotest_common.sh@960 -- # wait 4145403 00:14:45.379 06:40:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:45.379 06:40:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:45.379 06:40:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:45.379 06:40:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:45.379 06:40:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:45.379 06:40:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.379 06:40:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:45.379 06:40:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.285 06:40:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:47.285 00:14:47.285 real 0m12.232s 00:14:47.285 user 0m27.408s 00:14:47.285 sys 0m3.003s 
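The repeated 'kill -0 ... sleep 0.5' records above come from a simple existence poll on the perf process. A sketch of that pattern, not the literal script: the 20-iteration cap matches the second wait in the trace (the first uses 30), and NOT is the autotest framework's negation wrapper visible in the log.

    delay=0
    while kill -0 "$perf_pid"; do              # kill -0 only checks that the PID still exists
        if (( delay++ > 20 )); then            # give up after roughly 10 s of 0.5 s sleeps
            echo "perf did not exit after subsystem delete" >&2
            exit 1
        fi
        sleep 0.5
    done
    NOT wait "$perf_pid"   # assert the perf run failed, since it aborted on I/O errors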
00:14:47.285 06:40:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:47.285 06:40:51 -- common/autotest_common.sh@10 -- # set +x 00:14:47.285 ************************************ 00:14:47.285 END TEST nvmf_delete_subsystem 00:14:47.285 ************************************ 00:14:47.285 06:40:51 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:47.285 06:40:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:47.285 06:40:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:47.285 06:40:51 -- common/autotest_common.sh@10 -- # set +x 00:14:47.574 ************************************ 00:14:47.574 START TEST nvmf_ns_masking 00:14:47.574 ************************************ 00:14:47.574 06:40:51 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:47.574 * Looking for test storage... 00:14:47.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:47.574 06:40:52 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:47.574 06:40:52 -- nvmf/common.sh@7 -- # uname -s 00:14:47.574 06:40:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:47.574 06:40:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:47.574 06:40:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:47.574 06:40:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:47.574 06:40:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:47.574 06:40:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:47.574 06:40:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:47.574 06:40:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:47.574 06:40:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:47.574 06:40:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:47.574 06:40:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:47.574 06:40:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:47.574 06:40:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:47.574 06:40:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:47.574 06:40:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:47.574 06:40:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:47.574 06:40:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:47.574 06:40:52 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:47.574 06:40:52 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:47.574 06:40:52 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:47.574 06:40:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.574 06:40:52 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.574 06:40:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.574 06:40:52 -- paths/export.sh@5 -- # export PATH 00:14:47.575 06:40:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.575 06:40:52 -- nvmf/common.sh@47 -- # : 0 00:14:47.575 06:40:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:47.575 06:40:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:47.575 06:40:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:47.575 06:40:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:47.575 06:40:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:47.575 06:40:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:47.575 06:40:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:47.575 06:40:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:47.575 06:40:52 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:47.575 06:40:52 -- target/ns_masking.sh@11 -- # loops=5 00:14:47.575 06:40:52 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:47.575 06:40:52 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:14:47.575 06:40:52 -- target/ns_masking.sh@15 -- # uuidgen 00:14:47.575 06:40:52 -- target/ns_masking.sh@15 -- # HOSTID=f5b1c794-451e-46d6-ae18-57604d74d538 00:14:47.575 06:40:52 -- target/ns_masking.sh@44 -- # nvmftestinit 00:14:47.575 06:40:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:47.575 06:40:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:47.575 06:40:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:47.575 06:40:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:47.575 06:40:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:47.575 06:40:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.575 06:40:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.575 06:40:52 -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:14:47.575 06:40:52 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:47.575 06:40:52 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:47.575 06:40:52 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:47.575 06:40:52 -- common/autotest_common.sh@10 -- # set +x 00:14:49.480 06:40:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:49.480 06:40:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:49.480 06:40:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:49.480 06:40:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:49.480 06:40:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:49.480 06:40:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:49.480 06:40:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:49.480 06:40:53 -- nvmf/common.sh@295 -- # net_devs=() 00:14:49.480 06:40:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:49.480 06:40:53 -- nvmf/common.sh@296 -- # e810=() 00:14:49.480 06:40:53 -- nvmf/common.sh@296 -- # local -ga e810 00:14:49.480 06:40:53 -- nvmf/common.sh@297 -- # x722=() 00:14:49.480 06:40:53 -- nvmf/common.sh@297 -- # local -ga x722 00:14:49.480 06:40:53 -- nvmf/common.sh@298 -- # mlx=() 00:14:49.480 06:40:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:49.480 06:40:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:49.480 06:40:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:49.480 06:40:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:49.480 06:40:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:49.480 06:40:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:49.480 06:40:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:49.480 06:40:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:49.480 06:40:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:49.480 06:40:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:49.480 06:40:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:49.480 06:40:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:49.480 06:40:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:49.480 06:40:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:49.480 06:40:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:49.480 06:40:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:49.480 06:40:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:49.480 06:40:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:49.480 06:40:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:49.480 06:40:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:49.480 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:49.480 06:40:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:49.480 06:40:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:49.480 06:40:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:49.480 06:40:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:49.480 06:40:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:49.480 06:40:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:49.480 06:40:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:49.480 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:49.480 06:40:53 -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:14:49.480 06:40:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:49.480 06:40:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:49.480 06:40:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:49.480 06:40:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:49.480 06:40:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:49.480 06:40:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:49.480 06:40:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:49.480 06:40:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:49.480 06:40:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.480 06:40:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:49.480 06:40:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.480 06:40:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:49.480 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:49.480 06:40:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.480 06:40:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:49.480 06:40:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.480 06:40:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:49.480 06:40:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.480 06:40:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:49.480 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:49.480 06:40:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.480 06:40:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:49.480 06:40:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:49.480 06:40:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:49.480 06:40:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:49.480 06:40:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:49.480 06:40:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:49.480 06:40:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:49.480 06:40:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:49.480 06:40:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:49.480 06:40:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:49.480 06:40:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:49.480 06:40:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:49.480 06:40:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:49.480 06:40:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:49.480 06:40:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:49.480 06:40:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:49.480 06:40:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:49.480 06:40:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:49.480 06:40:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:49.480 06:40:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:49.480 06:40:54 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:49.480 06:40:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:49.738 06:40:54 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:49.738 06:40:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:49.738 06:40:54 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:49.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:49.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:14:49.738 00:14:49.738 --- 10.0.0.2 ping statistics --- 00:14:49.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.738 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:14:49.738 06:40:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:49.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:49.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:14:49.738 00:14:49.738 --- 10.0.0.1 ping statistics --- 00:14:49.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.738 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:14:49.738 06:40:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:49.738 06:40:54 -- nvmf/common.sh@411 -- # return 0 00:14:49.738 06:40:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:49.738 06:40:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:49.738 06:40:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:49.738 06:40:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:49.738 06:40:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:49.739 06:40:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:49.739 06:40:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:49.739 06:40:54 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:14:49.739 06:40:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:49.739 06:40:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:49.739 06:40:54 -- common/autotest_common.sh@10 -- # set +x 00:14:49.739 06:40:54 -- nvmf/common.sh@470 -- # nvmfpid=4148299 00:14:49.739 06:40:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:49.739 06:40:54 -- nvmf/common.sh@471 -- # waitforlisten 4148299 00:14:49.739 06:40:54 -- common/autotest_common.sh@817 -- # '[' -z 4148299 ']' 00:14:49.739 06:40:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.739 06:40:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:49.739 06:40:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.739 06:40:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:49.739 06:40:54 -- common/autotest_common.sh@10 -- # set +x 00:14:49.739 [2024-04-17 06:40:54.197710] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:14:49.739 [2024-04-17 06:40:54.197803] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.739 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.739 [2024-04-17 06:40:54.271481] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:49.997 [2024-04-17 06:40:54.365455] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:49.997 [2024-04-17 06:40:54.365514] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:49.997 [2024-04-17 06:40:54.365539] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.997 [2024-04-17 06:40:54.365553] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.997 [2024-04-17 06:40:54.365564] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:49.997 [2024-04-17 06:40:54.365657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.997 [2024-04-17 06:40:54.365711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:49.997 [2024-04-17 06:40:54.365773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:49.997 [2024-04-17 06:40:54.365776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.997 06:40:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:49.997 06:40:54 -- common/autotest_common.sh@850 -- # return 0 00:14:49.997 06:40:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:49.997 06:40:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:49.997 06:40:54 -- common/autotest_common.sh@10 -- # set +x 00:14:49.997 06:40:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:49.997 06:40:54 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:50.254 [2024-04-17 06:40:54.789043] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:50.254 06:40:54 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:14:50.254 06:40:54 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:14:50.254 06:40:54 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:50.512 Malloc1 00:14:50.512 06:40:55 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:50.770 Malloc2 00:14:50.770 06:40:55 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:51.027 06:40:55 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:51.285 06:40:55 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:51.542 [2024-04-17 06:40:56.065568] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:51.542 06:40:56 -- target/ns_masking.sh@61 -- # connect 00:14:51.542 06:40:56 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f5b1c794-451e-46d6-ae18-57604d74d538 -a 10.0.0.2 -s 4420 -i 4 00:14:51.800 06:40:56 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:14:51.800 06:40:56 -- common/autotest_common.sh@1184 -- # local i=0 00:14:51.800 06:40:56 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:14:51.800 06:40:56 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 
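Condensed, the target-side setup traced above amounts to the following RPC/connect sequence (a sketch only, using the same names, sizes and addresses that appear in the trace; rpc.py abbreviates the full scripts/rpc.py path):

  # Create the TCP transport and two 64 MB malloc bdevs (512-byte blocks)
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py bdev_malloc_create 64 512 -b Malloc2

  # Create the subsystem, expose Malloc1 as namespace 1, listen on 10.0.0.2:4420
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: connect with a fixed host NQN and host UUID so masking can key on them
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      -I f5b1c794-451e-46d6-ae18-57604d74d538 -a 10.0.0.2 -s 4420 -i 4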
00:14:51.800 06:40:56 -- common/autotest_common.sh@1191 -- # sleep 2 00:14:53.698 06:40:58 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:14:53.698 06:40:58 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:14:53.698 06:40:58 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:14:53.698 06:40:58 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:14:53.698 06:40:58 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:14:53.698 06:40:58 -- common/autotest_common.sh@1194 -- # return 0 00:14:53.698 06:40:58 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:53.698 06:40:58 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:53.956 06:40:58 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:53.956 06:40:58 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:53.956 06:40:58 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:14:53.956 06:40:58 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:53.956 06:40:58 -- target/ns_masking.sh@39 -- # grep 0x1 00:14:53.956 [ 0]:0x1 00:14:53.956 06:40:58 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:53.956 06:40:58 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:53.956 06:40:58 -- target/ns_masking.sh@40 -- # nguid=ef6d6ce0dd9f428892c2b6e1c932621d 00:14:53.956 06:40:58 -- target/ns_masking.sh@41 -- # [[ ef6d6ce0dd9f428892c2b6e1c932621d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:53.956 06:40:58 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:54.213 06:40:58 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:14:54.213 06:40:58 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:54.213 06:40:58 -- target/ns_masking.sh@39 -- # grep 0x1 00:14:54.213 [ 0]:0x1 00:14:54.213 06:40:58 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:54.213 06:40:58 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:54.213 06:40:58 -- target/ns_masking.sh@40 -- # nguid=ef6d6ce0dd9f428892c2b6e1c932621d 00:14:54.213 06:40:58 -- target/ns_masking.sh@41 -- # [[ ef6d6ce0dd9f428892c2b6e1c932621d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:54.213 06:40:58 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:14:54.213 06:40:58 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:54.213 06:40:58 -- target/ns_masking.sh@39 -- # grep 0x2 00:14:54.213 [ 1]:0x2 00:14:54.213 06:40:58 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:54.213 06:40:58 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:54.213 06:40:58 -- target/ns_masking.sh@40 -- # nguid=8d9397d2a902405ab97bb9b12bc39c5d 00:14:54.213 06:40:58 -- target/ns_masking.sh@41 -- # [[ 8d9397d2a902405ab97bb9b12bc39c5d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:54.213 06:40:58 -- target/ns_masking.sh@69 -- # disconnect 00:14:54.213 06:40:58 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:54.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.471 06:40:58 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:54.471 06:40:59 -- target/ns_masking.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:55.053 06:40:59 -- target/ns_masking.sh@77 -- # connect 1 00:14:55.053 06:40:59 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f5b1c794-451e-46d6-ae18-57604d74d538 -a 10.0.0.2 -s 4420 -i 4 00:14:55.053 06:40:59 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:55.053 06:40:59 -- common/autotest_common.sh@1184 -- # local i=0 00:14:55.053 06:40:59 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:14:55.053 06:40:59 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:14:55.053 06:40:59 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:14:55.053 06:40:59 -- common/autotest_common.sh@1191 -- # sleep 2 00:14:56.950 06:41:01 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:14:56.950 06:41:01 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:14:56.950 06:41:01 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:14:56.950 06:41:01 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:14:56.950 06:41:01 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:14:56.950 06:41:01 -- common/autotest_common.sh@1194 -- # return 0 00:14:56.950 06:41:01 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:56.950 06:41:01 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:57.207 06:41:01 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:57.207 06:41:01 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:57.207 06:41:01 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:14:57.207 06:41:01 -- common/autotest_common.sh@638 -- # local es=0 00:14:57.207 06:41:01 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:14:57.207 06:41:01 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:14:57.207 06:41:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:57.207 06:41:01 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:14:57.207 06:41:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:57.207 06:41:01 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:14:57.207 06:41:01 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:57.207 06:41:01 -- target/ns_masking.sh@39 -- # grep 0x1 00:14:57.207 06:41:01 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:57.207 06:41:01 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:57.207 06:41:01 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:57.207 06:41:01 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:57.207 06:41:01 -- common/autotest_common.sh@641 -- # es=1 00:14:57.207 06:41:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:57.207 06:41:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:57.207 06:41:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:57.207 06:41:01 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:14:57.207 06:41:01 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:57.207 06:41:01 -- target/ns_masking.sh@39 -- # grep 0x2 00:14:57.207 [ 0]:0x2 00:14:57.207 06:41:01 -- target/ns_masking.sh@40 -- # nvme id-ns 
/dev/nvme0 -n 0x2 -o json 00:14:57.207 06:41:01 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:57.207 06:41:01 -- target/ns_masking.sh@40 -- # nguid=8d9397d2a902405ab97bb9b12bc39c5d 00:14:57.207 06:41:01 -- target/ns_masking.sh@41 -- # [[ 8d9397d2a902405ab97bb9b12bc39c5d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:57.207 06:41:01 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:57.464 06:41:01 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:14:57.464 06:41:01 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:57.464 06:41:01 -- target/ns_masking.sh@39 -- # grep 0x1 00:14:57.464 [ 0]:0x1 00:14:57.464 06:41:01 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:57.464 06:41:01 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:57.464 06:41:01 -- target/ns_masking.sh@40 -- # nguid=ef6d6ce0dd9f428892c2b6e1c932621d 00:14:57.464 06:41:01 -- target/ns_masking.sh@41 -- # [[ ef6d6ce0dd9f428892c2b6e1c932621d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:57.464 06:41:01 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:14:57.464 06:41:01 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:57.464 06:41:01 -- target/ns_masking.sh@39 -- # grep 0x2 00:14:57.464 [ 1]:0x2 00:14:57.464 06:41:01 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:57.464 06:41:01 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:57.464 06:41:02 -- target/ns_masking.sh@40 -- # nguid=8d9397d2a902405ab97bb9b12bc39c5d 00:14:57.464 06:41:02 -- target/ns_masking.sh@41 -- # [[ 8d9397d2a902405ab97bb9b12bc39c5d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:57.464 06:41:02 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:57.721 06:41:02 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:14:57.721 06:41:02 -- common/autotest_common.sh@638 -- # local es=0 00:14:57.721 06:41:02 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:14:57.721 06:41:02 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:14:57.721 06:41:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:57.721 06:41:02 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:14:57.721 06:41:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:57.721 06:41:02 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:14:57.721 06:41:02 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:57.721 06:41:02 -- target/ns_masking.sh@39 -- # grep 0x1 00:14:57.721 06:41:02 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:57.721 06:41:02 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:57.721 06:41:02 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:57.721 06:41:02 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:57.721 06:41:02 -- common/autotest_common.sh@641 -- # es=1 00:14:57.721 06:41:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:57.721 06:41:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:57.721 06:41:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:57.721 06:41:02 -- 
target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:14:57.721 06:41:02 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:57.721 06:41:02 -- target/ns_masking.sh@39 -- # grep 0x2 00:14:57.721 [ 0]:0x2 00:14:57.721 06:41:02 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:57.721 06:41:02 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:57.978 06:41:02 -- target/ns_masking.sh@40 -- # nguid=8d9397d2a902405ab97bb9b12bc39c5d 00:14:57.978 06:41:02 -- target/ns_masking.sh@41 -- # [[ 8d9397d2a902405ab97bb9b12bc39c5d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:57.978 06:41:02 -- target/ns_masking.sh@91 -- # disconnect 00:14:57.978 06:41:02 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:57.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.978 06:41:02 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:58.236 06:41:02 -- target/ns_masking.sh@95 -- # connect 2 00:14:58.236 06:41:02 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f5b1c794-451e-46d6-ae18-57604d74d538 -a 10.0.0.2 -s 4420 -i 4 00:14:58.236 06:41:02 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:58.236 06:41:02 -- common/autotest_common.sh@1184 -- # local i=0 00:14:58.236 06:41:02 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:14:58.236 06:41:02 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:14:58.236 06:41:02 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:14:58.236 06:41:02 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:00.133 06:41:04 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:00.133 06:41:04 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:00.133 06:41:04 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:00.390 06:41:04 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:15:00.391 06:41:04 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:00.391 06:41:04 -- common/autotest_common.sh@1194 -- # return 0 00:15:00.391 06:41:04 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:00.391 06:41:04 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:00.391 06:41:04 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:00.391 06:41:04 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:00.391 06:41:04 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:15:00.391 06:41:04 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:00.391 06:41:04 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:00.391 [ 0]:0x1 00:15:00.391 06:41:04 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:00.391 06:41:04 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:00.391 06:41:04 -- target/ns_masking.sh@40 -- # nguid=ef6d6ce0dd9f428892c2b6e1c932621d 00:15:00.391 06:41:04 -- target/ns_masking.sh@41 -- # [[ ef6d6ce0dd9f428892c2b6e1c932621d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:00.391 06:41:04 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:15:00.391 06:41:04 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:00.391 06:41:04 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:00.391 [ 1]:0x2 
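The masking steps exercised in the trace reduce to this short sequence (again a sketch, with the same subsystem, namespace ID and host NQN as above):

  # Attach the namespace without auto-visibility: no host sees it until explicitly allowed
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible

  # Allow, then revoke, host1's access to namespace 1
  rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

  # Initiator-side visibility check: the NSID must be listed and its NGUID must be non-zero
  nvme list-ns /dev/nvme0 | grep 0x1
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid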
00:15:00.391 06:41:04 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:00.391 06:41:04 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:00.391 06:41:04 -- target/ns_masking.sh@40 -- # nguid=8d9397d2a902405ab97bb9b12bc39c5d 00:15:00.391 06:41:04 -- target/ns_masking.sh@41 -- # [[ 8d9397d2a902405ab97bb9b12bc39c5d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:00.391 06:41:04 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:00.648 06:41:05 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:15:00.648 06:41:05 -- common/autotest_common.sh@638 -- # local es=0 00:15:00.648 06:41:05 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:15:00.648 06:41:05 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:15:00.648 06:41:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:00.648 06:41:05 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:15:00.648 06:41:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:00.648 06:41:05 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:15:00.648 06:41:05 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:00.648 06:41:05 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:00.648 06:41:05 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:00.648 06:41:05 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:00.648 06:41:05 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:00.648 06:41:05 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:00.648 06:41:05 -- common/autotest_common.sh@641 -- # es=1 00:15:00.648 06:41:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:00.648 06:41:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:00.648 06:41:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:00.648 06:41:05 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:15:00.648 06:41:05 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:00.648 06:41:05 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:00.648 [ 0]:0x2 00:15:00.648 06:41:05 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:00.648 06:41:05 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:00.648 06:41:05 -- target/ns_masking.sh@40 -- # nguid=8d9397d2a902405ab97bb9b12bc39c5d 00:15:00.648 06:41:05 -- target/ns_masking.sh@41 -- # [[ 8d9397d2a902405ab97bb9b12bc39c5d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:00.648 06:41:05 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:00.648 06:41:05 -- common/autotest_common.sh@638 -- # local es=0 00:15:00.648 06:41:05 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:00.648 06:41:05 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:00.648 06:41:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:00.648 06:41:05 -- common/autotest_common.sh@630 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:00.648 06:41:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:00.648 06:41:05 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:00.648 06:41:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:00.648 06:41:05 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:00.648 06:41:05 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:00.648 06:41:05 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:00.906 [2024-04-17 06:41:05.407319] nvmf_rpc.c:1770:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:00.906 request: 00:15:00.906 { 00:15:00.906 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.906 "nsid": 2, 00:15:00.906 "host": "nqn.2016-06.io.spdk:host1", 00:15:00.906 "method": "nvmf_ns_remove_host", 00:15:00.906 "req_id": 1 00:15:00.906 } 00:15:00.906 Got JSON-RPC error response 00:15:00.906 response: 00:15:00.906 { 00:15:00.906 "code": -32602, 00:15:00.906 "message": "Invalid parameters" 00:15:00.906 } 00:15:00.906 06:41:05 -- common/autotest_common.sh@641 -- # es=1 00:15:00.906 06:41:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:00.906 06:41:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:00.906 06:41:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:00.906 06:41:05 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:15:00.906 06:41:05 -- common/autotest_common.sh@638 -- # local es=0 00:15:00.906 06:41:05 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:15:00.906 06:41:05 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:15:00.906 06:41:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:00.906 06:41:05 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:15:00.906 06:41:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:00.906 06:41:05 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:15:00.906 06:41:05 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:00.906 06:41:05 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:00.906 06:41:05 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:00.906 06:41:05 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:00.906 06:41:05 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:00.906 06:41:05 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:00.906 06:41:05 -- common/autotest_common.sh@641 -- # es=1 00:15:00.906 06:41:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:00.906 06:41:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:00.906 06:41:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:00.906 06:41:05 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:15:00.906 06:41:05 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:00.906 06:41:05 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:01.164 [ 0]:0x2 00:15:01.164 06:41:05 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:01.164 06:41:05 -- 
target/ns_masking.sh@40 -- # jq -r .nguid 00:15:01.164 06:41:05 -- target/ns_masking.sh@40 -- # nguid=8d9397d2a902405ab97bb9b12bc39c5d 00:15:01.164 06:41:05 -- target/ns_masking.sh@41 -- # [[ 8d9397d2a902405ab97bb9b12bc39c5d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:01.164 06:41:05 -- target/ns_masking.sh@108 -- # disconnect 00:15:01.164 06:41:05 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:01.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.164 06:41:05 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:01.423 06:41:05 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:01.423 06:41:05 -- target/ns_masking.sh@114 -- # nvmftestfini 00:15:01.423 06:41:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:01.423 06:41:05 -- nvmf/common.sh@117 -- # sync 00:15:01.423 06:41:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:01.423 06:41:05 -- nvmf/common.sh@120 -- # set +e 00:15:01.423 06:41:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:01.423 06:41:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:01.423 rmmod nvme_tcp 00:15:01.423 rmmod nvme_fabrics 00:15:01.423 rmmod nvme_keyring 00:15:01.423 06:41:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:01.423 06:41:05 -- nvmf/common.sh@124 -- # set -e 00:15:01.423 06:41:05 -- nvmf/common.sh@125 -- # return 0 00:15:01.423 06:41:05 -- nvmf/common.sh@478 -- # '[' -n 4148299 ']' 00:15:01.423 06:41:05 -- nvmf/common.sh@479 -- # killprocess 4148299 00:15:01.423 06:41:05 -- common/autotest_common.sh@936 -- # '[' -z 4148299 ']' 00:15:01.423 06:41:05 -- common/autotest_common.sh@940 -- # kill -0 4148299 00:15:01.423 06:41:05 -- common/autotest_common.sh@941 -- # uname 00:15:01.423 06:41:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:01.423 06:41:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4148299 00:15:01.423 06:41:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:01.423 06:41:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:01.423 06:41:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4148299' 00:15:01.423 killing process with pid 4148299 00:15:01.423 06:41:06 -- common/autotest_common.sh@955 -- # kill 4148299 00:15:01.423 06:41:06 -- common/autotest_common.sh@960 -- # wait 4148299 00:15:01.989 06:41:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:01.989 06:41:06 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:01.989 06:41:06 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:01.989 06:41:06 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:01.989 06:41:06 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:01.989 06:41:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:01.989 06:41:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:01.989 06:41:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.926 06:41:08 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:03.926 00:15:03.926 real 0m16.403s 00:15:03.926 user 0m50.998s 00:15:03.926 sys 0m3.626s 00:15:03.926 06:41:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:03.926 06:41:08 -- common/autotest_common.sh@10 -- # set +x 00:15:03.926 ************************************ 00:15:03.926 END TEST nvmf_ns_masking 00:15:03.926 
************************************ 00:15:03.926 06:41:08 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:15:03.926 06:41:08 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:03.926 06:41:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:03.926 06:41:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:03.926 06:41:08 -- common/autotest_common.sh@10 -- # set +x 00:15:03.926 ************************************ 00:15:03.926 START TEST nvmf_nvme_cli 00:15:03.926 ************************************ 00:15:03.926 06:41:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:03.926 * Looking for test storage... 00:15:03.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:03.926 06:41:08 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:03.926 06:41:08 -- nvmf/common.sh@7 -- # uname -s 00:15:03.926 06:41:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:03.926 06:41:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:03.926 06:41:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:03.926 06:41:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:03.926 06:41:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:03.926 06:41:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:03.926 06:41:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:03.926 06:41:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:03.926 06:41:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:03.926 06:41:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:03.926 06:41:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:03.926 06:41:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:03.926 06:41:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:03.926 06:41:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:03.926 06:41:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:03.926 06:41:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:04.186 06:41:08 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:04.186 06:41:08 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:04.186 06:41:08 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:04.186 06:41:08 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:04.186 06:41:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.186 06:41:08 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.186 06:41:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.186 06:41:08 -- paths/export.sh@5 -- # export PATH 00:15:04.186 06:41:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.186 06:41:08 -- nvmf/common.sh@47 -- # : 0 00:15:04.186 06:41:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:04.186 06:41:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:04.186 06:41:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:04.186 06:41:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:04.186 06:41:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:04.186 06:41:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:04.186 06:41:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:04.186 06:41:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:04.186 06:41:08 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:04.186 06:41:08 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:04.186 06:41:08 -- target/nvme_cli.sh@14 -- # devs=() 00:15:04.186 06:41:08 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:04.186 06:41:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:04.186 06:41:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:04.186 06:41:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:04.186 06:41:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:04.186 06:41:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:04.186 06:41:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.186 06:41:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:04.186 06:41:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.186 06:41:08 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:04.186 06:41:08 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:04.186 06:41:08 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:04.186 06:41:08 -- common/autotest_common.sh@10 -- # set +x 00:15:06.089 06:41:10 -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:06.089 06:41:10 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:06.089 06:41:10 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:06.089 06:41:10 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:06.089 06:41:10 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:06.089 06:41:10 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:06.089 06:41:10 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:06.089 06:41:10 -- nvmf/common.sh@295 -- # net_devs=() 00:15:06.089 06:41:10 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:06.089 06:41:10 -- nvmf/common.sh@296 -- # e810=() 00:15:06.089 06:41:10 -- nvmf/common.sh@296 -- # local -ga e810 00:15:06.089 06:41:10 -- nvmf/common.sh@297 -- # x722=() 00:15:06.089 06:41:10 -- nvmf/common.sh@297 -- # local -ga x722 00:15:06.089 06:41:10 -- nvmf/common.sh@298 -- # mlx=() 00:15:06.089 06:41:10 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:06.089 06:41:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:06.089 06:41:10 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:06.089 06:41:10 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:06.089 06:41:10 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:06.089 06:41:10 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:06.089 06:41:10 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:06.089 06:41:10 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:06.089 06:41:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:06.089 06:41:10 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:06.089 06:41:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:06.089 06:41:10 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:06.089 06:41:10 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:06.089 06:41:10 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:06.089 06:41:10 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:06.089 06:41:10 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:06.089 06:41:10 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:06.089 06:41:10 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:06.089 06:41:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:06.089 06:41:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:06.089 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:06.089 06:41:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:06.089 06:41:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:06.089 06:41:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:06.089 06:41:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:06.089 06:41:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:06.089 06:41:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:06.089 06:41:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:06.089 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:06.089 06:41:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:06.089 06:41:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:06.089 06:41:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:06.089 06:41:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:06.089 06:41:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:15:06.089 06:41:10 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:06.089 06:41:10 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:06.089 06:41:10 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:06.089 06:41:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:06.089 06:41:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:06.089 06:41:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:06.089 06:41:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:06.089 06:41:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:06.089 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:06.089 06:41:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:06.089 06:41:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:06.089 06:41:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:06.089 06:41:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:06.089 06:41:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:06.089 06:41:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:06.089 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:06.089 06:41:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:06.089 06:41:10 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:06.089 06:41:10 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:06.089 06:41:10 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:06.089 06:41:10 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:06.089 06:41:10 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:06.089 06:41:10 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:06.089 06:41:10 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:06.089 06:41:10 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:06.089 06:41:10 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:06.089 06:41:10 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:06.089 06:41:10 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:06.089 06:41:10 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:06.089 06:41:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:06.089 06:41:10 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:06.089 06:41:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:06.089 06:41:10 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:06.089 06:41:10 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:06.089 06:41:10 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:06.089 06:41:10 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:06.089 06:41:10 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:06.089 06:41:10 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:06.089 06:41:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:06.089 06:41:10 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:06.089 06:41:10 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:06.089 06:41:10 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:06.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:06.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:15:06.089 00:15:06.089 --- 10.0.0.2 ping statistics --- 00:15:06.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.089 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:15:06.089 06:41:10 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:06.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:06.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:15:06.089 00:15:06.089 --- 10.0.0.1 ping statistics --- 00:15:06.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.089 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:15:06.089 06:41:10 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:06.089 06:41:10 -- nvmf/common.sh@411 -- # return 0 00:15:06.089 06:41:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:06.089 06:41:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:06.089 06:41:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:06.089 06:41:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:06.089 06:41:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:06.089 06:41:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:06.089 06:41:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:06.089 06:41:10 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:06.089 06:41:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:06.089 06:41:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:06.089 06:41:10 -- common/autotest_common.sh@10 -- # set +x 00:15:06.089 06:41:10 -- nvmf/common.sh@470 -- # nvmfpid=4151855 00:15:06.089 06:41:10 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:06.089 06:41:10 -- nvmf/common.sh@471 -- # waitforlisten 4151855 00:15:06.089 06:41:10 -- common/autotest_common.sh@817 -- # '[' -z 4151855 ']' 00:15:06.089 06:41:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.089 06:41:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:06.089 06:41:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.089 06:41:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:06.089 06:41:10 -- common/autotest_common.sh@10 -- # set +x 00:15:06.348 [2024-04-17 06:41:10.711288] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:15:06.348 [2024-04-17 06:41:10.711359] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.348 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.348 [2024-04-17 06:41:10.775046] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:06.348 [2024-04-17 06:41:10.858751] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:06.348 [2024-04-17 06:41:10.858796] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:06.348 [2024-04-17 06:41:10.858819] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:06.348 [2024-04-17 06:41:10.858830] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:06.348 [2024-04-17 06:41:10.858856] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:06.348 [2024-04-17 06:41:10.858918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.348 [2024-04-17 06:41:10.859054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:06.348 [2024-04-17 06:41:10.859114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:06.348 [2024-04-17 06:41:10.859116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.606 06:41:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:06.606 06:41:10 -- common/autotest_common.sh@850 -- # return 0 00:15:06.606 06:41:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:06.606 06:41:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:06.606 06:41:10 -- common/autotest_common.sh@10 -- # set +x 00:15:06.606 06:41:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:06.606 06:41:11 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:06.606 06:41:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.606 06:41:11 -- common/autotest_common.sh@10 -- # set +x 00:15:06.606 [2024-04-17 06:41:11.012037] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:06.606 06:41:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.606 06:41:11 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:06.606 06:41:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.606 06:41:11 -- common/autotest_common.sh@10 -- # set +x 00:15:06.606 Malloc0 00:15:06.606 06:41:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.606 06:41:11 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:06.606 06:41:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.606 06:41:11 -- common/autotest_common.sh@10 -- # set +x 00:15:06.606 Malloc1 00:15:06.606 06:41:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.606 06:41:11 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:06.606 06:41:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.606 06:41:11 -- common/autotest_common.sh@10 -- # set +x 00:15:06.606 06:41:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.606 06:41:11 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:06.607 06:41:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.607 06:41:11 -- common/autotest_common.sh@10 -- # set +x 00:15:06.607 06:41:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.607 06:41:11 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:06.607 06:41:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.607 06:41:11 -- common/autotest_common.sh@10 -- # set +x 00:15:06.607 06:41:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.607 06:41:11 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:15:06.607 06:41:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.607 06:41:11 -- common/autotest_common.sh@10 -- # set +x 00:15:06.607 [2024-04-17 06:41:11.097888] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:06.607 06:41:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.607 06:41:11 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:06.607 06:41:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.607 06:41:11 -- common/autotest_common.sh@10 -- # set +x 00:15:06.607 06:41:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.607 06:41:11 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:15:06.864 00:15:06.864 Discovery Log Number of Records 2, Generation counter 2 00:15:06.864 =====Discovery Log Entry 0====== 00:15:06.864 trtype: tcp 00:15:06.864 adrfam: ipv4 00:15:06.864 subtype: current discovery subsystem 00:15:06.864 treq: not required 00:15:06.864 portid: 0 00:15:06.864 trsvcid: 4420 00:15:06.864 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:06.864 traddr: 10.0.0.2 00:15:06.864 eflags: explicit discovery connections, duplicate discovery information 00:15:06.864 sectype: none 00:15:06.864 =====Discovery Log Entry 1====== 00:15:06.864 trtype: tcp 00:15:06.864 adrfam: ipv4 00:15:06.864 subtype: nvme subsystem 00:15:06.864 treq: not required 00:15:06.864 portid: 0 00:15:06.864 trsvcid: 4420 00:15:06.864 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:06.864 traddr: 10.0.0.2 00:15:06.864 eflags: none 00:15:06.864 sectype: none 00:15:06.864 06:41:11 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:06.864 06:41:11 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:06.864 06:41:11 -- nvmf/common.sh@511 -- # local dev _ 00:15:06.864 06:41:11 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:06.864 06:41:11 -- nvmf/common.sh@510 -- # nvme list 00:15:06.864 06:41:11 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:15:06.864 06:41:11 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:06.864 06:41:11 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:15:06.864 06:41:11 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:06.864 06:41:11 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:06.864 06:41:11 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:07.430 06:41:11 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:07.430 06:41:11 -- common/autotest_common.sh@1184 -- # local i=0 00:15:07.430 06:41:11 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:07.430 06:41:11 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:15:07.430 06:41:11 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:15:07.430 06:41:11 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:09.328 06:41:13 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:09.328 06:41:13 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:09.328 06:41:13 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:09.328 06:41:13 -- common/autotest_common.sh@1193 -- # nvme_devices=2 
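The rest of the nvme_cli test boils down to eight target-side RPCs (transport, two malloc bdevs, one subsystem with two namespaces, a data listener and a discovery listener on 10.0.0.2:4420) and two nvme-cli calls from the host, followed by waiting until both namespaces surface as block devices carrying the subsystem serial. A condensed, hedged sketch with paths shortened ("$rpc" stands for scripts/rpc.py in the SPDK tree; the trace also passes --hostnqn/--hostid to nvme-cli, omitted here):

  rpc="./scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # initiator side: the discovery log should report two entries, then connect
  # and wait for both namespaces to appear with the SPDKISFASTANDAWESOME serial
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 2 ]; do sleep 1; done

Teardown is the mirror image, exactly as the trace shows further down: nvme disconnect -n nqn.2016-06.io.spdk:cnode1 on the host, then nvmf_delete_subsystem and killing the target process.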
00:15:09.328 06:41:13 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:09.328 06:41:13 -- common/autotest_common.sh@1194 -- # return 0 00:15:09.328 06:41:13 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:09.328 06:41:13 -- nvmf/common.sh@511 -- # local dev _ 00:15:09.328 06:41:13 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:09.328 06:41:13 -- nvmf/common.sh@510 -- # nvme list 00:15:09.586 06:41:14 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:15:09.586 06:41:14 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:09.586 06:41:14 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:15:09.586 06:41:14 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:09.586 06:41:14 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:09.586 06:41:14 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:15:09.586 06:41:14 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:09.586 06:41:14 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:09.586 06:41:14 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:15:09.586 06:41:14 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:09.586 06:41:14 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:09.586 /dev/nvme0n1 ]] 00:15:09.586 06:41:14 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:09.586 06:41:14 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:09.586 06:41:14 -- nvmf/common.sh@511 -- # local dev _ 00:15:09.586 06:41:14 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:09.586 06:41:14 -- nvmf/common.sh@510 -- # nvme list 00:15:09.852 06:41:14 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:15:09.852 06:41:14 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:09.852 06:41:14 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:15:09.852 06:41:14 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:09.852 06:41:14 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:09.852 06:41:14 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:15:09.852 06:41:14 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:09.852 06:41:14 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:09.852 06:41:14 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:15:09.852 06:41:14 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:09.852 06:41:14 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:09.852 06:41:14 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:10.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.114 06:41:14 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:10.114 06:41:14 -- common/autotest_common.sh@1205 -- # local i=0 00:15:10.114 06:41:14 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:15:10.114 06:41:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:10.114 06:41:14 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:15:10.114 06:41:14 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:10.114 06:41:14 -- common/autotest_common.sh@1217 -- # return 0 00:15:10.114 06:41:14 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:10.114 06:41:14 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:10.114 06:41:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:10.114 06:41:14 -- common/autotest_common.sh@10 -- # set +x 00:15:10.114 06:41:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:10.114 06:41:14 -- 
target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:10.114 06:41:14 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:10.114 06:41:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:10.114 06:41:14 -- nvmf/common.sh@117 -- # sync 00:15:10.114 06:41:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:10.114 06:41:14 -- nvmf/common.sh@120 -- # set +e 00:15:10.114 06:41:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:10.114 06:41:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:10.114 rmmod nvme_tcp 00:15:10.114 rmmod nvme_fabrics 00:15:10.114 rmmod nvme_keyring 00:15:10.114 06:41:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:10.114 06:41:14 -- nvmf/common.sh@124 -- # set -e 00:15:10.114 06:41:14 -- nvmf/common.sh@125 -- # return 0 00:15:10.114 06:41:14 -- nvmf/common.sh@478 -- # '[' -n 4151855 ']' 00:15:10.114 06:41:14 -- nvmf/common.sh@479 -- # killprocess 4151855 00:15:10.114 06:41:14 -- common/autotest_common.sh@936 -- # '[' -z 4151855 ']' 00:15:10.114 06:41:14 -- common/autotest_common.sh@940 -- # kill -0 4151855 00:15:10.114 06:41:14 -- common/autotest_common.sh@941 -- # uname 00:15:10.114 06:41:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:10.114 06:41:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4151855 00:15:10.114 06:41:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:10.114 06:41:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:10.114 06:41:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4151855' 00:15:10.114 killing process with pid 4151855 00:15:10.114 06:41:14 -- common/autotest_common.sh@955 -- # kill 4151855 00:15:10.114 06:41:14 -- common/autotest_common.sh@960 -- # wait 4151855 00:15:10.373 06:41:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:10.373 06:41:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:10.373 06:41:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:10.373 06:41:14 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:10.373 06:41:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:10.373 06:41:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.373 06:41:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:10.373 06:41:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.905 06:41:16 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:12.905 00:15:12.905 real 0m8.450s 00:15:12.905 user 0m16.399s 00:15:12.905 sys 0m2.152s 00:15:12.905 06:41:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:12.905 06:41:16 -- common/autotest_common.sh@10 -- # set +x 00:15:12.905 ************************************ 00:15:12.905 END TEST nvmf_nvme_cli 00:15:12.905 ************************************ 00:15:12.905 06:41:16 -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:15:12.905 06:41:16 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:12.905 06:41:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:12.905 06:41:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:12.905 06:41:16 -- common/autotest_common.sh@10 -- # set +x 00:15:12.905 ************************************ 00:15:12.905 START TEST nvmf_vfio_user 00:15:12.905 ************************************ 00:15:12.905 06:41:17 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:12.905 * Looking for test storage... 00:15:12.905 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:12.905 06:41:17 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:12.905 06:41:17 -- nvmf/common.sh@7 -- # uname -s 00:15:12.905 06:41:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:12.905 06:41:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:12.905 06:41:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:12.905 06:41:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:12.905 06:41:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:12.905 06:41:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:12.905 06:41:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:12.905 06:41:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:12.905 06:41:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:12.905 06:41:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:12.905 06:41:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:12.905 06:41:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:12.905 06:41:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:12.905 06:41:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:12.905 06:41:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:12.905 06:41:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:12.905 06:41:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:12.905 06:41:17 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.905 06:41:17 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.905 06:41:17 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.905 06:41:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.906 06:41:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.906 06:41:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.906 06:41:17 -- paths/export.sh@5 -- # export PATH 00:15:12.906 06:41:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.906 06:41:17 -- nvmf/common.sh@47 -- # : 0 00:15:12.906 06:41:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:12.906 06:41:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:12.906 06:41:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:12.906 06:41:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:12.906 06:41:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:12.906 06:41:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:12.906 06:41:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:12.906 06:41:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:12.906 06:41:17 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:12.906 06:41:17 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:12.906 06:41:17 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:12.906 06:41:17 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:12.906 06:41:17 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:12.906 06:41:17 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:12.906 06:41:17 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:12.906 06:41:17 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:12.906 06:41:17 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:12.906 06:41:17 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:12.906 06:41:17 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=4152667 00:15:12.906 06:41:17 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:12.906 06:41:17 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 4152667' 00:15:12.906 Process pid: 4152667 00:15:12.906 06:41:17 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:12.906 06:41:17 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 4152667 00:15:12.906 06:41:17 -- common/autotest_common.sh@817 -- # '[' -z 4152667 ']' 00:15:12.906 06:41:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.906 06:41:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:12.906 06:41:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.906 06:41:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:12.906 06:41:17 -- common/autotest_common.sh@10 -- # set +x 00:15:12.906 [2024-04-17 06:41:17.170601] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:15:12.906 [2024-04-17 06:41:17.170698] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.906 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.906 [2024-04-17 06:41:17.239788] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:12.906 [2024-04-17 06:41:17.335646] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.906 [2024-04-17 06:41:17.335721] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.906 [2024-04-17 06:41:17.335738] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.906 [2024-04-17 06:41:17.335761] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.906 [2024-04-17 06:41:17.335772] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:12.906 [2024-04-17 06:41:17.339202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.906 [2024-04-17 06:41:17.339242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.906 [2024-04-17 06:41:17.339296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:12.906 [2024-04-17 06:41:17.339300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.906 06:41:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:12.906 06:41:17 -- common/autotest_common.sh@850 -- # return 0 00:15:12.906 06:41:17 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:14.278 06:41:18 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:14.278 06:41:18 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:14.278 06:41:18 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:14.278 06:41:18 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:14.278 06:41:18 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:14.278 06:41:18 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:14.536 Malloc1 00:15:14.536 06:41:19 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:14.793 06:41:19 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:15.051 06:41:19 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:15.309 06:41:19 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:15.309 06:41:19 -- 
target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:15.309 06:41:19 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:15.567 Malloc2 00:15:15.567 06:41:20 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:15.825 06:41:20 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:16.082 06:41:20 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:16.340 06:41:20 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:16.340 06:41:20 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:16.340 06:41:20 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:16.340 06:41:20 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:16.340 06:41:20 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:16.340 06:41:20 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:16.340 [2024-04-17 06:41:20.881299] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:15:16.340 [2024-04-17 06:41:20.881338] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4153209 ] 00:15:16.340 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.340 [2024-04-17 06:41:20.915498] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:16.340 [2024-04-17 06:41:20.924546] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:16.340 [2024-04-17 06:41:20.924575] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fbd6be91000 00:15:16.340 [2024-04-17 06:41:20.925553] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:16.340 [2024-04-17 06:41:20.926549] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:16.340 [2024-04-17 06:41:20.929201] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:16.340 [2024-04-17 06:41:20.929559] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:16.340 [2024-04-17 06:41:20.930567] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:16.340 [2024-04-17 06:41:20.931582] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:15:16.340 [2024-04-17 06:41:20.932586] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:16.340 [2024-04-17 06:41:20.933596] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:16.340 [2024-04-17 06:41:20.934607] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:16.340 [2024-04-17 06:41:20.934628] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fbd6ac43000 00:15:16.340 [2024-04-17 06:41:20.936024] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:16.600 [2024-04-17 06:41:20.954961] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:16.600 [2024-04-17 06:41:20.955022] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:16.600 [2024-04-17 06:41:20.959764] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:16.600 [2024-04-17 06:41:20.959819] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:16.600 [2024-04-17 06:41:20.959907] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:16.600 [2024-04-17 06:41:20.959935] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:16.600 [2024-04-17 06:41:20.959945] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:16.600 [2024-04-17 06:41:20.960757] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:16.600 [2024-04-17 06:41:20.960776] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:16.600 [2024-04-17 06:41:20.960788] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:16.600 [2024-04-17 06:41:20.961759] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:16.600 [2024-04-17 06:41:20.961778] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:16.600 [2024-04-17 06:41:20.961792] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:16.600 [2024-04-17 06:41:20.962763] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:16.600 [2024-04-17 06:41:20.962783] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:16.600 [2024-04-17 06:41:20.963766] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:16.600 [2024-04-17 06:41:20.963785] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:16.600 [2024-04-17 06:41:20.963795] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:16.600 [2024-04-17 06:41:20.963806] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:16.600 [2024-04-17 06:41:20.963914] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:16.600 [2024-04-17 06:41:20.963922] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:16.600 [2024-04-17 06:41:20.963930] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:16.600 [2024-04-17 06:41:20.964774] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:16.600 [2024-04-17 06:41:20.965778] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:16.600 [2024-04-17 06:41:20.966781] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:16.600 [2024-04-17 06:41:20.967775] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:16.600 [2024-04-17 06:41:20.967882] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:16.600 [2024-04-17 06:41:20.968811] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:16.600 [2024-04-17 06:41:20.968830] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:16.600 [2024-04-17 06:41:20.968839] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:16.600 [2024-04-17 06:41:20.968863] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:16.600 [2024-04-17 06:41:20.968876] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:16.600 [2024-04-17 06:41:20.968901] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:16.600 [2024-04-17 06:41:20.968910] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:16.600 [2024-04-17 06:41:20.968928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:16.600 [2024-04-17 
06:41:20.968986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:16.600 [2024-04-17 06:41:20.969000] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:16.600 [2024-04-17 06:41:20.969008] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:16.600 [2024-04-17 06:41:20.969015] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:16.600 [2024-04-17 06:41:20.969022] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:16.600 [2024-04-17 06:41:20.969029] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:16.600 [2024-04-17 06:41:20.969041] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:16.600 [2024-04-17 06:41:20.969049] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:16.600 [2024-04-17 06:41:20.969061] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:16.600 [2024-04-17 06:41:20.969074] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:16.600 [2024-04-17 06:41:20.969092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:16.600 [2024-04-17 06:41:20.969111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.600 [2024-04-17 06:41:20.969124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.600 [2024-04-17 06:41:20.969135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.600 [2024-04-17 06:41:20.969147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.600 [2024-04-17 06:41:20.969169] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:16.600 [2024-04-17 06:41:20.969202] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:16.600 [2024-04-17 06:41:20.969218] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:16.600 [2024-04-17 06:41:20.969230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:16.600 [2024-04-17 06:41:20.969240] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:16.600 [2024-04-17 06:41:20.969248] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:16.600 [2024-04-17 06:41:20.969263] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:16.600 [2024-04-17 06:41:20.969273] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:16.600 [2024-04-17 06:41:20.969286] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:16.600 [2024-04-17 06:41:20.969299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:16.600 [2024-04-17 06:41:20.969350] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:16.600 [2024-04-17 06:41:20.969363] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:16.600 [2024-04-17 06:41:20.969376] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:16.600 [2024-04-17 06:41:20.969384] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:16.600 [2024-04-17 06:41:20.969394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:16.600 [2024-04-17 06:41:20.969409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:16.600 [2024-04-17 06:41:20.969424] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:16.600 [2024-04-17 06:41:20.969443] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:16.600 [2024-04-17 06:41:20.969456] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:16.600 [2024-04-17 06:41:20.969473] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:16.600 [2024-04-17 06:41:20.969481] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:16.600 [2024-04-17 06:41:20.969490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:16.600 [2024-04-17 06:41:20.969529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:16.600 [2024-04-17 06:41:20.969557] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:16.600 [2024-04-17 06:41:20.969577] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:16.600 [2024-04-17 06:41:20.969590] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fb000 len:4096 00:15:16.600 [2024-04-17 06:41:20.969598] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:16.601 [2024-04-17 06:41:20.969607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:16.601 [2024-04-17 06:41:20.969620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:16.601 [2024-04-17 06:41:20.969633] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:16.601 [2024-04-17 06:41:20.969643] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:16.601 [2024-04-17 06:41:20.969656] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:16.601 [2024-04-17 06:41:20.969666] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:16.601 [2024-04-17 06:41:20.969674] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:16.601 [2024-04-17 06:41:20.969682] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:16.601 [2024-04-17 06:41:20.969689] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:16.601 [2024-04-17 06:41:20.969697] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:16.601 [2024-04-17 06:41:20.969721] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:16.601 [2024-04-17 06:41:20.969737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:16.601 [2024-04-17 06:41:20.969755] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:16.601 [2024-04-17 06:41:20.969766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:16.601 [2024-04-17 06:41:20.969785] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:16.601 [2024-04-17 06:41:20.969796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:16.601 [2024-04-17 06:41:20.969812] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:16.601 [2024-04-17 06:41:20.969825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:16.601 [2024-04-17 06:41:20.969841] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:16.601 [2024-04-17 06:41:20.969850] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:16.601 [2024-04-17 06:41:20.969855] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:16.601 [2024-04-17 06:41:20.969861] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:16.601 [2024-04-17 06:41:20.969870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:16.601 [2024-04-17 06:41:20.969881] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:16.601 [2024-04-17 06:41:20.969888] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:16.601 [2024-04-17 06:41:20.969897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:16.601 [2024-04-17 06:41:20.969907] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:16.601 [2024-04-17 06:41:20.969915] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:16.601 [2024-04-17 06:41:20.969923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:16.601 [2024-04-17 06:41:20.969934] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:16.601 [2024-04-17 06:41:20.969942] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:16.601 [2024-04-17 06:41:20.969950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:16.601 [2024-04-17 06:41:20.969961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:16.601 [2024-04-17 06:41:20.969980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:16.601 [2024-04-17 06:41:20.969995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:16.601 [2024-04-17 06:41:20.970006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:16.601 ===================================================== 00:15:16.601 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:16.601 ===================================================== 00:15:16.601 Controller Capabilities/Features 00:15:16.601 ================================ 00:15:16.601 Vendor ID: 4e58 00:15:16.601 Subsystem Vendor ID: 4e58 00:15:16.601 Serial Number: SPDK1 00:15:16.601 Model Number: SPDK bdev Controller 00:15:16.601 Firmware Version: 24.05 00:15:16.601 Recommended Arb Burst: 6 00:15:16.601 IEEE OUI Identifier: 8d 6b 50 00:15:16.601 Multi-path I/O 00:15:16.601 May have multiple subsystem ports: Yes 00:15:16.601 May have multiple controllers: Yes 00:15:16.601 Associated with SR-IOV VF: No 00:15:16.601 Max Data Transfer Size: 131072 00:15:16.601 Max Number of Namespaces: 32 00:15:16.601 Max Number of I/O Queues: 127 00:15:16.601 NVMe 
Specification Version (VS): 1.3 00:15:16.601 NVMe Specification Version (Identify): 1.3 00:15:16.601 Maximum Queue Entries: 256 00:15:16.601 Contiguous Queues Required: Yes 00:15:16.601 Arbitration Mechanisms Supported 00:15:16.601 Weighted Round Robin: Not Supported 00:15:16.601 Vendor Specific: Not Supported 00:15:16.601 Reset Timeout: 15000 ms 00:15:16.601 Doorbell Stride: 4 bytes 00:15:16.601 NVM Subsystem Reset: Not Supported 00:15:16.601 Command Sets Supported 00:15:16.601 NVM Command Set: Supported 00:15:16.601 Boot Partition: Not Supported 00:15:16.601 Memory Page Size Minimum: 4096 bytes 00:15:16.601 Memory Page Size Maximum: 4096 bytes 00:15:16.601 Persistent Memory Region: Not Supported 00:15:16.601 Optional Asynchronous Events Supported 00:15:16.601 Namespace Attribute Notices: Supported 00:15:16.601 Firmware Activation Notices: Not Supported 00:15:16.601 ANA Change Notices: Not Supported 00:15:16.601 PLE Aggregate Log Change Notices: Not Supported 00:15:16.601 LBA Status Info Alert Notices: Not Supported 00:15:16.601 EGE Aggregate Log Change Notices: Not Supported 00:15:16.601 Normal NVM Subsystem Shutdown event: Not Supported 00:15:16.601 Zone Descriptor Change Notices: Not Supported 00:15:16.601 Discovery Log Change Notices: Not Supported 00:15:16.601 Controller Attributes 00:15:16.601 128-bit Host Identifier: Supported 00:15:16.601 Non-Operational Permissive Mode: Not Supported 00:15:16.601 NVM Sets: Not Supported 00:15:16.601 Read Recovery Levels: Not Supported 00:15:16.601 Endurance Groups: Not Supported 00:15:16.601 Predictable Latency Mode: Not Supported 00:15:16.601 Traffic Based Keep ALive: Not Supported 00:15:16.601 Namespace Granularity: Not Supported 00:15:16.601 SQ Associations: Not Supported 00:15:16.601 UUID List: Not Supported 00:15:16.601 Multi-Domain Subsystem: Not Supported 00:15:16.601 Fixed Capacity Management: Not Supported 00:15:16.601 Variable Capacity Management: Not Supported 00:15:16.601 Delete Endurance Group: Not Supported 00:15:16.601 Delete NVM Set: Not Supported 00:15:16.601 Extended LBA Formats Supported: Not Supported 00:15:16.601 Flexible Data Placement Supported: Not Supported 00:15:16.601 00:15:16.601 Controller Memory Buffer Support 00:15:16.601 ================================ 00:15:16.601 Supported: No 00:15:16.601 00:15:16.601 Persistent Memory Region Support 00:15:16.601 ================================ 00:15:16.601 Supported: No 00:15:16.601 00:15:16.601 Admin Command Set Attributes 00:15:16.601 ============================ 00:15:16.601 Security Send/Receive: Not Supported 00:15:16.601 Format NVM: Not Supported 00:15:16.601 Firmware Activate/Download: Not Supported 00:15:16.601 Namespace Management: Not Supported 00:15:16.601 Device Self-Test: Not Supported 00:15:16.601 Directives: Not Supported 00:15:16.601 NVMe-MI: Not Supported 00:15:16.601 Virtualization Management: Not Supported 00:15:16.601 Doorbell Buffer Config: Not Supported 00:15:16.601 Get LBA Status Capability: Not Supported 00:15:16.601 Command & Feature Lockdown Capability: Not Supported 00:15:16.601 Abort Command Limit: 4 00:15:16.601 Async Event Request Limit: 4 00:15:16.601 Number of Firmware Slots: N/A 00:15:16.601 Firmware Slot 1 Read-Only: N/A 00:15:16.601 Firmware Activation Without Reset: N/A 00:15:16.601 Multiple Update Detection Support: N/A 00:15:16.601 Firmware Update Granularity: No Information Provided 00:15:16.601 Per-Namespace SMART Log: No 00:15:16.601 Asymmetric Namespace Access Log Page: Not Supported 00:15:16.601 Subsystem NQN: 
nqn.2019-07.io.spdk:cnode1 00:15:16.601 Command Effects Log Page: Supported 00:15:16.601 Get Log Page Extended Data: Supported 00:15:16.601 Telemetry Log Pages: Not Supported 00:15:16.601 Persistent Event Log Pages: Not Supported 00:15:16.601 Supported Log Pages Log Page: May Support 00:15:16.601 Commands Supported & Effects Log Page: Not Supported 00:15:16.601 Feature Identifiers & Effects Log Page:May Support 00:15:16.601 NVMe-MI Commands & Effects Log Page: May Support 00:15:16.601 Data Area 4 for Telemetry Log: Not Supported 00:15:16.601 Error Log Page Entries Supported: 128 00:15:16.601 Keep Alive: Supported 00:15:16.601 Keep Alive Granularity: 10000 ms 00:15:16.601 00:15:16.602 NVM Command Set Attributes 00:15:16.602 ========================== 00:15:16.602 Submission Queue Entry Size 00:15:16.602 Max: 64 00:15:16.602 Min: 64 00:15:16.602 Completion Queue Entry Size 00:15:16.602 Max: 16 00:15:16.602 Min: 16 00:15:16.602 Number of Namespaces: 32 00:15:16.602 Compare Command: Supported 00:15:16.602 Write Uncorrectable Command: Not Supported 00:15:16.602 Dataset Management Command: Supported 00:15:16.602 Write Zeroes Command: Supported 00:15:16.602 Set Features Save Field: Not Supported 00:15:16.602 Reservations: Not Supported 00:15:16.602 Timestamp: Not Supported 00:15:16.602 Copy: Supported 00:15:16.602 Volatile Write Cache: Present 00:15:16.602 Atomic Write Unit (Normal): 1 00:15:16.602 Atomic Write Unit (PFail): 1 00:15:16.602 Atomic Compare & Write Unit: 1 00:15:16.602 Fused Compare & Write: Supported 00:15:16.602 Scatter-Gather List 00:15:16.602 SGL Command Set: Supported (Dword aligned) 00:15:16.602 SGL Keyed: Not Supported 00:15:16.602 SGL Bit Bucket Descriptor: Not Supported 00:15:16.602 SGL Metadata Pointer: Not Supported 00:15:16.602 Oversized SGL: Not Supported 00:15:16.602 SGL Metadata Address: Not Supported 00:15:16.602 SGL Offset: Not Supported 00:15:16.602 Transport SGL Data Block: Not Supported 00:15:16.602 Replay Protected Memory Block: Not Supported 00:15:16.602 00:15:16.602 Firmware Slot Information 00:15:16.602 ========================= 00:15:16.602 Active slot: 1 00:15:16.602 Slot 1 Firmware Revision: 24.05 00:15:16.602 00:15:16.602 00:15:16.602 Commands Supported and Effects 00:15:16.602 ============================== 00:15:16.602 Admin Commands 00:15:16.602 -------------- 00:15:16.602 Get Log Page (02h): Supported 00:15:16.602 Identify (06h): Supported 00:15:16.602 Abort (08h): Supported 00:15:16.602 Set Features (09h): Supported 00:15:16.602 Get Features (0Ah): Supported 00:15:16.602 Asynchronous Event Request (0Ch): Supported 00:15:16.602 Keep Alive (18h): Supported 00:15:16.602 I/O Commands 00:15:16.602 ------------ 00:15:16.602 Flush (00h): Supported LBA-Change 00:15:16.602 Write (01h): Supported LBA-Change 00:15:16.602 Read (02h): Supported 00:15:16.602 Compare (05h): Supported 00:15:16.602 Write Zeroes (08h): Supported LBA-Change 00:15:16.602 Dataset Management (09h): Supported LBA-Change 00:15:16.602 Copy (19h): Supported LBA-Change 00:15:16.602 Unknown (79h): Supported LBA-Change 00:15:16.602 Unknown (7Ah): Supported 00:15:16.602 00:15:16.602 Error Log 00:15:16.602 ========= 00:15:16.602 00:15:16.602 Arbitration 00:15:16.602 =========== 00:15:16.602 Arbitration Burst: 1 00:15:16.602 00:15:16.602 Power Management 00:15:16.602 ================ 00:15:16.602 Number of Power States: 1 00:15:16.602 Current Power State: Power State #0 00:15:16.602 Power State #0: 00:15:16.602 Max Power: 0.00 W 00:15:16.602 Non-Operational State: Operational 00:15:16.602 Entry 
Latency: Not Reported 00:15:16.602 Exit Latency: Not Reported 00:15:16.602 Relative Read Throughput: 0 00:15:16.602 Relative Read Latency: 0 00:15:16.602 Relative Write Throughput: 0 00:15:16.602 Relative Write Latency: 0 00:15:16.602 Idle Power: Not Reported 00:15:16.602 Active Power: Not Reported 00:15:16.602 Non-Operational Permissive Mode: Not Supported 00:15:16.602 00:15:16.602 Health Information 00:15:16.602 ================== 00:15:16.602 Critical Warnings: 00:15:16.602 Available Spare Space: OK 00:15:16.602 Temperature: OK 00:15:16.602 Device Reliability: OK 00:15:16.602 Read Only: No 00:15:16.602 Volatile Memory Backup: OK [2024-04-17 06:41:20.970131] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:16.602 [2024-04-17 06:41:20.970147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:16.602 [2024-04-17 06:41:20.970210] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:16.602 [2024-04-17 06:41:20.970229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.602 [2024-04-17 06:41:20.970240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.602 [2024-04-17 06:41:20.970249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.602 [2024-04-17 06:41:20.970263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.602 [2024-04-17 06:41:20.973244] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:16.602 [2024-04-17 06:41:20.973267] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:16.602 [2024-04-17 06:41:20.973821] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:16.602 [2024-04-17 06:41:20.973899] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:16.602 [2024-04-17 06:41:20.973914] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:16.602 [2024-04-17 06:41:20.974815] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:16.602 [2024-04-17 06:41:20.974836] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:16.602 [2024-04-17 06:41:20.974892] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:16.602 [2024-04-17 06:41:20.978205] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:16.602 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:16.602 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:16.602 Available Spare: 0% 00:15:16.602 Available Spare Threshold: 0% 00:15:16.602 Life Percentage Used: 0%
00:15:16.602 Data Units Read: 0 00:15:16.602 Data Units Written: 0 00:15:16.602 Host Read Commands: 0 00:15:16.602 Host Write Commands: 0 00:15:16.602 Controller Busy Time: 0 minutes 00:15:16.602 Power Cycles: 0 00:15:16.602 Power On Hours: 0 hours 00:15:16.602 Unsafe Shutdowns: 0 00:15:16.602 Unrecoverable Media Errors: 0 00:15:16.602 Lifetime Error Log Entries: 0 00:15:16.602 Warning Temperature Time: 0 minutes 00:15:16.602 Critical Temperature Time: 0 minutes 00:15:16.602 00:15:16.602 Number of Queues 00:15:16.602 ================ 00:15:16.602 Number of I/O Submission Queues: 127 00:15:16.602 Number of I/O Completion Queues: 127 00:15:16.602 00:15:16.602 Active Namespaces 00:15:16.602 ================= 00:15:16.602 Namespace ID:1 00:15:16.602 Error Recovery Timeout: Unlimited 00:15:16.602 Command Set Identifier: NVM (00h) 00:15:16.602 Deallocate: Supported 00:15:16.602 Deallocated/Unwritten Error: Not Supported 00:15:16.602 Deallocated Read Value: Unknown 00:15:16.602 Deallocate in Write Zeroes: Not Supported 00:15:16.602 Deallocated Guard Field: 0xFFFF 00:15:16.602 Flush: Supported 00:15:16.602 Reservation: Supported 00:15:16.602 Namespace Sharing Capabilities: Multiple Controllers 00:15:16.602 Size (in LBAs): 131072 (0GiB) 00:15:16.602 Capacity (in LBAs): 131072 (0GiB) 00:15:16.602 Utilization (in LBAs): 131072 (0GiB) 00:15:16.602 NGUID: 0F2E9C0F05D340D2BC4D208A803049EC 00:15:16.602 UUID: 0f2e9c0f-05d3-40d2-bc4d-208a803049ec 00:15:16.602 Thin Provisioning: Not Supported 00:15:16.602 Per-NS Atomic Units: Yes 00:15:16.602 Atomic Boundary Size (Normal): 0 00:15:16.602 Atomic Boundary Size (PFail): 0 00:15:16.602 Atomic Boundary Offset: 0 00:15:16.602 Maximum Single Source Range Length: 65535 00:15:16.602 Maximum Copy Length: 65535 00:15:16.602 Maximum Source Range Count: 1 00:15:16.602 NGUID/EUI64 Never Reused: No 00:15:16.602 Namespace Write Protected: No 00:15:16.602 Number of LBA Formats: 1 00:15:16.602 Current LBA Format: LBA Format #00 00:15:16.602 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:16.602 00:15:16.602 06:41:21 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:16.602 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.885 [2024-04-17 06:41:21.208028] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:22.180 [2024-04-17 06:41:26.230750] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:22.180 Initializing NVMe Controllers 00:15:22.180 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:22.180 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:22.180 Initialization complete. Launching workers. 
00:15:22.180 ======================================================== 00:15:22.180 Latency(us) 00:15:22.180 Device Information : IOPS MiB/s Average min max 00:15:22.180 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33674.40 131.54 3802.45 1205.54 9328.24 00:15:22.180 ======================================================== 00:15:22.180 Total : 33674.40 131.54 3802.45 1205.54 9328.24 00:15:22.180 00:15:22.180 06:41:26 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:22.180 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.180 [2024-04-17 06:41:26.473860] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:27.439 [2024-04-17 06:41:31.515101] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:27.439 Initializing NVMe Controllers 00:15:27.439 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:27.439 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:27.439 Initialization complete. Launching workers. 00:15:27.439 ======================================================== 00:15:27.439 Latency(us) 00:15:27.439 Device Information : IOPS MiB/s Average min max 00:15:27.439 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16031.86 62.62 7989.36 4984.58 14764.23 00:15:27.439 ======================================================== 00:15:27.439 Total : 16031.86 62.62 7989.36 4984.58 14764.23 00:15:27.439 00:15:27.439 06:41:31 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:27.439 EAL: No free 2048 kB hugepages reported on node 1 00:15:27.439 [2024-04-17 06:41:31.731146] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:32.701 [2024-04-17 06:41:36.806500] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:32.701 Initializing NVMe Controllers 00:15:32.701 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:32.701 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:32.701 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:32.701 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:32.701 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:32.701 Initialization complete. Launching workers. 
00:15:32.701 Starting thread on core 2 00:15:32.701 Starting thread on core 3 00:15:32.701 Starting thread on core 1 00:15:32.701 06:41:36 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:32.701 EAL: No free 2048 kB hugepages reported on node 1 00:15:32.701 [2024-04-17 06:41:37.101603] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:35.981 [2024-04-17 06:41:40.161350] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:35.981 Initializing NVMe Controllers 00:15:35.981 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:35.981 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:35.981 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:35.981 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:35.981 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:35.981 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:35.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:35.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:35.981 Initialization complete. Launching workers. 00:15:35.981 Starting thread on core 1 with urgent priority queue 00:15:35.981 Starting thread on core 2 with urgent priority queue 00:15:35.981 Starting thread on core 3 with urgent priority queue 00:15:35.981 Starting thread on core 0 with urgent priority queue 00:15:35.981 SPDK bdev Controller (SPDK1 ) core 0: 4506.00 IO/s 22.19 secs/100000 ios 00:15:35.981 SPDK bdev Controller (SPDK1 ) core 1: 5293.67 IO/s 18.89 secs/100000 ios 00:15:35.981 SPDK bdev Controller (SPDK1 ) core 2: 5079.00 IO/s 19.69 secs/100000 ios 00:15:35.981 SPDK bdev Controller (SPDK1 ) core 3: 5086.67 IO/s 19.66 secs/100000 ios 00:15:35.981 ======================================================== 00:15:35.981 00:15:35.982 06:41:40 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:35.982 EAL: No free 2048 kB hugepages reported on node 1 00:15:35.982 [2024-04-17 06:41:40.465628] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:35.982 [2024-04-17 06:41:40.500294] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:35.982 Initializing NVMe Controllers 00:15:35.982 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:35.982 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:35.982 Namespace ID: 1 size: 0GB 00:15:35.982 Initialization complete. 00:15:35.982 INFO: using host memory buffer for IO 00:15:35.982 Hello world! 
00:15:35.982 06:41:40 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:36.239 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.239 [2024-04-17 06:41:40.792616] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:37.612 Initializing NVMe Controllers 00:15:37.612 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:37.612 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:37.612 Initialization complete. Launching workers. 00:15:37.612 submit (in ns) avg, min, max = 8111.2, 3476.7, 4017967.8 00:15:37.612 complete (in ns) avg, min, max = 23351.7, 2041.1, 4017403.3 00:15:37.612 00:15:37.612 Submit histogram 00:15:37.612 ================ 00:15:37.612 Range in us Cumulative Count 00:15:37.612 3.461 - 3.484: 0.0670% ( 9) 00:15:37.612 3.484 - 3.508: 0.3795% ( 42) 00:15:37.612 3.508 - 3.532: 1.5405% ( 156) 00:15:37.612 3.532 - 3.556: 4.1155% ( 346) 00:15:37.612 3.556 - 3.579: 9.9725% ( 787) 00:15:37.612 3.579 - 3.603: 18.0621% ( 1087) 00:15:37.612 3.603 - 3.627: 27.3424% ( 1247) 00:15:37.612 3.627 - 3.650: 38.3121% ( 1474) 00:15:37.612 3.650 - 3.674: 45.9626% ( 1028) 00:15:37.612 3.674 - 3.698: 52.6606% ( 900) 00:15:37.612 3.698 - 3.721: 56.8877% ( 568) 00:15:37.612 3.721 - 3.745: 60.7725% ( 522) 00:15:37.612 3.745 - 3.769: 64.0694% ( 443) 00:15:37.612 3.769 - 3.793: 66.9644% ( 389) 00:15:37.612 3.793 - 3.816: 69.7849% ( 379) 00:15:37.612 3.816 - 3.840: 73.2083% ( 460) 00:15:37.612 3.840 - 3.864: 77.9341% ( 635) 00:15:37.612 3.864 - 3.887: 81.8784% ( 530) 00:15:37.612 3.887 - 3.911: 84.7585% ( 387) 00:15:37.612 3.911 - 3.935: 86.7604% ( 269) 00:15:37.612 3.935 - 3.959: 88.2265% ( 197) 00:15:37.612 3.959 - 3.982: 89.4694% ( 167) 00:15:37.612 3.982 - 4.006: 90.5708% ( 148) 00:15:37.612 4.006 - 4.030: 91.4267% ( 115) 00:15:37.612 4.030 - 4.053: 92.2230% ( 107) 00:15:37.612 4.053 - 4.077: 93.0937% ( 117) 00:15:37.612 4.077 - 4.101: 94.0165% ( 124) 00:15:37.612 4.101 - 4.124: 94.7831% ( 103) 00:15:37.612 4.124 - 4.148: 95.1924% ( 55) 00:15:37.612 4.148 - 4.172: 95.4826% ( 39) 00:15:37.612 4.172 - 4.196: 95.7059% ( 30) 00:15:37.612 4.196 - 4.219: 95.9664% ( 35) 00:15:37.612 4.219 - 4.243: 96.1375% ( 23) 00:15:37.612 4.243 - 4.267: 96.2640% ( 17) 00:15:37.612 4.267 - 4.290: 96.3459% ( 11) 00:15:37.612 4.290 - 4.314: 96.4203% ( 10) 00:15:37.612 4.314 - 4.338: 96.5394% ( 16) 00:15:37.612 4.338 - 4.361: 96.6287% ( 12) 00:15:37.612 4.361 - 4.385: 96.7106% ( 11) 00:15:37.612 4.385 - 4.409: 96.7552% ( 6) 00:15:37.612 4.409 - 4.433: 96.7850% ( 4) 00:15:37.612 4.433 - 4.456: 96.8073% ( 3) 00:15:37.612 4.456 - 4.480: 96.8594% ( 7) 00:15:37.612 4.480 - 4.504: 96.8669% ( 1) 00:15:37.612 4.504 - 4.527: 96.8892% ( 3) 00:15:37.612 4.527 - 4.551: 96.8966% ( 1) 00:15:37.612 4.551 - 4.575: 96.9413% ( 6) 00:15:37.612 4.575 - 4.599: 96.9562% ( 2) 00:15:37.612 4.599 - 4.622: 96.9859% ( 4) 00:15:37.612 4.622 - 4.646: 97.0008% ( 2) 00:15:37.612 4.646 - 4.670: 97.0380% ( 5) 00:15:37.612 4.670 - 4.693: 97.0752% ( 5) 00:15:37.612 4.693 - 4.717: 97.1199% ( 6) 00:15:37.612 4.717 - 4.741: 97.1571% ( 5) 00:15:37.612 4.741 - 4.764: 97.2166% ( 8) 00:15:37.612 4.764 - 4.788: 97.2464% ( 4) 00:15:37.612 4.788 - 4.812: 97.3357% ( 12) 00:15:37.612 4.812 - 4.836: 97.4101% ( 10) 00:15:37.612 4.836 - 4.859: 97.4622% ( 7) 00:15:37.612 4.859 - 4.883: 97.4846% ( 3) 00:15:37.612 
4.883 - 4.907: 97.5292% ( 6) 00:15:37.612 4.907 - 4.930: 97.5590% ( 4) 00:15:37.612 4.930 - 4.954: 97.5813% ( 3) 00:15:37.612 4.954 - 4.978: 97.6111% ( 4) 00:15:37.612 4.978 - 5.001: 97.6483% ( 5) 00:15:37.612 5.001 - 5.025: 97.6781% ( 4) 00:15:37.612 5.025 - 5.049: 97.7153% ( 5) 00:15:37.612 5.049 - 5.073: 97.7450% ( 4) 00:15:37.612 5.073 - 5.096: 97.7525% ( 1) 00:15:37.612 5.096 - 5.120: 97.7748% ( 3) 00:15:37.612 5.120 - 5.144: 97.8046% ( 4) 00:15:37.612 5.144 - 5.167: 97.8195% ( 2) 00:15:37.612 5.167 - 5.191: 97.8269% ( 1) 00:15:37.612 5.191 - 5.215: 97.8492% ( 3) 00:15:37.612 5.215 - 5.239: 97.8790% ( 4) 00:15:37.612 5.239 - 5.262: 97.8864% ( 1) 00:15:37.612 5.262 - 5.286: 97.9013% ( 2) 00:15:37.612 5.286 - 5.310: 97.9236% ( 3) 00:15:37.612 5.310 - 5.333: 97.9311% ( 1) 00:15:37.612 5.333 - 5.357: 97.9534% ( 3) 00:15:37.612 5.357 - 5.381: 97.9609% ( 1) 00:15:37.612 5.404 - 5.428: 97.9683% ( 1) 00:15:37.612 5.428 - 5.452: 97.9832% ( 2) 00:15:37.612 5.452 - 5.476: 97.9906% ( 1) 00:15:37.612 5.476 - 5.499: 98.0055% ( 2) 00:15:37.612 5.523 - 5.547: 98.0204% ( 2) 00:15:37.612 5.570 - 5.594: 98.0278% ( 1) 00:15:37.612 5.618 - 5.641: 98.0353% ( 1) 00:15:37.612 5.665 - 5.689: 98.0427% ( 1) 00:15:37.612 5.713 - 5.736: 98.0502% ( 1) 00:15:37.612 5.736 - 5.760: 98.0576% ( 1) 00:15:37.612 5.760 - 5.784: 98.0650% ( 1) 00:15:37.612 5.807 - 5.831: 98.0725% ( 1) 00:15:37.612 5.855 - 5.879: 98.0799% ( 1) 00:15:37.612 5.879 - 5.902: 98.0874% ( 1) 00:15:37.613 5.902 - 5.926: 98.0948% ( 1) 00:15:37.613 5.926 - 5.950: 98.1023% ( 1) 00:15:37.613 5.973 - 5.997: 98.1171% ( 2) 00:15:37.613 5.997 - 6.021: 98.1246% ( 1) 00:15:37.613 6.021 - 6.044: 98.1395% ( 2) 00:15:37.613 6.044 - 6.068: 98.1618% ( 3) 00:15:37.613 6.068 - 6.116: 98.1767% ( 2) 00:15:37.613 6.116 - 6.163: 98.1916% ( 2) 00:15:37.613 6.163 - 6.210: 98.2064% ( 2) 00:15:37.613 6.258 - 6.305: 98.2139% ( 1) 00:15:37.613 6.305 - 6.353: 98.2288% ( 2) 00:15:37.613 6.400 - 6.447: 98.2362% ( 1) 00:15:37.613 6.637 - 6.684: 98.2437% ( 1) 00:15:37.613 6.921 - 6.969: 98.2511% ( 1) 00:15:37.613 7.111 - 7.159: 98.2585% ( 1) 00:15:37.613 7.159 - 7.206: 98.2660% ( 1) 00:15:37.613 7.206 - 7.253: 98.2809% ( 2) 00:15:37.613 7.253 - 7.301: 98.2883% ( 1) 00:15:37.613 7.301 - 7.348: 98.2958% ( 1) 00:15:37.613 7.348 - 7.396: 98.3032% ( 1) 00:15:37.613 7.443 - 7.490: 98.3106% ( 1) 00:15:37.613 7.538 - 7.585: 98.3181% ( 1) 00:15:37.613 7.633 - 7.680: 98.3255% ( 1) 00:15:37.613 7.727 - 7.775: 98.3478% ( 3) 00:15:37.613 7.775 - 7.822: 98.3553% ( 1) 00:15:37.613 7.822 - 7.870: 98.3627% ( 1) 00:15:37.613 7.917 - 7.964: 98.3702% ( 1) 00:15:37.613 7.964 - 8.012: 98.3999% ( 4) 00:15:37.613 8.012 - 8.059: 98.4148% ( 2) 00:15:37.613 8.059 - 8.107: 98.4223% ( 1) 00:15:37.613 8.107 - 8.154: 98.4297% ( 1) 00:15:37.613 8.154 - 8.201: 98.4446% ( 2) 00:15:37.613 8.249 - 8.296: 98.4520% ( 1) 00:15:37.613 8.296 - 8.344: 98.4595% ( 1) 00:15:37.613 8.439 - 8.486: 98.4669% ( 1) 00:15:37.613 8.533 - 8.581: 98.4818% ( 2) 00:15:37.613 8.581 - 8.628: 98.4892% ( 1) 00:15:37.613 8.723 - 8.770: 98.4967% ( 1) 00:15:37.613 8.770 - 8.818: 98.5041% ( 1) 00:15:37.613 8.818 - 8.865: 98.5116% ( 1) 00:15:37.613 8.865 - 8.913: 98.5190% ( 1) 00:15:37.613 8.960 - 9.007: 98.5265% ( 1) 00:15:37.613 9.102 - 9.150: 98.5413% ( 2) 00:15:37.613 9.339 - 9.387: 98.5488% ( 1) 00:15:37.613 9.481 - 9.529: 98.5637% ( 2) 00:15:37.613 9.624 - 9.671: 98.5786% ( 2) 00:15:37.613 9.719 - 9.766: 98.5860% ( 1) 00:15:37.613 9.813 - 9.861: 98.5934% ( 1) 00:15:37.613 9.861 - 9.908: 98.6009% ( 1) 00:15:37.613 9.908 - 9.956: 98.6083% ( 
1) 00:15:37.613 9.956 - 10.003: 98.6158% ( 1) 00:15:37.613 10.050 - 10.098: 98.6232% ( 1) 00:15:37.613 10.240 - 10.287: 98.6381% ( 2) 00:15:37.613 10.287 - 10.335: 98.6455% ( 1) 00:15:37.613 10.477 - 10.524: 98.6530% ( 1) 00:15:37.613 10.524 - 10.572: 98.6604% ( 1) 00:15:37.613 10.619 - 10.667: 98.6753% ( 2) 00:15:37.613 10.809 - 10.856: 98.6827% ( 1) 00:15:37.613 10.904 - 10.951: 98.6976% ( 2) 00:15:37.613 11.093 - 11.141: 98.7051% ( 1) 00:15:37.613 11.236 - 11.283: 98.7200% ( 2) 00:15:37.613 11.330 - 11.378: 98.7274% ( 1) 00:15:37.613 11.425 - 11.473: 98.7423% ( 2) 00:15:37.613 11.662 - 11.710: 98.7572% ( 2) 00:15:37.613 11.899 - 11.947: 98.7646% ( 1) 00:15:37.613 11.994 - 12.041: 98.7795% ( 2) 00:15:37.613 12.421 - 12.516: 98.7944% ( 2) 00:15:37.613 12.516 - 12.610: 98.8093% ( 2) 00:15:37.613 12.610 - 12.705: 98.8241% ( 2) 00:15:37.613 12.895 - 12.990: 98.8390% ( 2) 00:15:37.613 12.990 - 13.084: 98.8465% ( 1) 00:15:37.613 13.084 - 13.179: 98.8539% ( 1) 00:15:37.613 13.179 - 13.274: 98.8688% ( 2) 00:15:37.613 13.274 - 13.369: 98.8762% ( 1) 00:15:37.613 13.369 - 13.464: 98.8837% ( 1) 00:15:37.613 13.464 - 13.559: 98.8911% ( 1) 00:15:37.613 13.843 - 13.938: 98.9060% ( 2) 00:15:37.613 13.938 - 14.033: 98.9134% ( 1) 00:15:37.613 14.033 - 14.127: 98.9283% ( 2) 00:15:37.613 14.222 - 14.317: 98.9358% ( 1) 00:15:37.613 14.886 - 14.981: 98.9432% ( 1) 00:15:37.613 14.981 - 15.076: 98.9507% ( 1) 00:15:37.613 15.170 - 15.265: 98.9581% ( 1) 00:15:37.613 15.265 - 15.360: 98.9655% ( 1) 00:15:37.613 16.024 - 16.119: 98.9730% ( 1) 00:15:37.613 17.067 - 17.161: 98.9804% ( 1) 00:15:37.613 17.351 - 17.446: 98.9953% ( 2) 00:15:37.613 17.446 - 17.541: 99.0176% ( 3) 00:15:37.613 17.541 - 17.636: 99.0623% ( 6) 00:15:37.613 17.636 - 17.730: 99.0995% ( 5) 00:15:37.613 17.730 - 17.825: 99.1367% ( 5) 00:15:37.613 17.825 - 17.920: 99.1739% ( 5) 00:15:37.613 17.920 - 18.015: 99.2260% ( 7) 00:15:37.613 18.015 - 18.110: 99.3004% ( 10) 00:15:37.613 18.110 - 18.204: 99.3600% ( 8) 00:15:37.613 18.204 - 18.299: 99.4493% ( 12) 00:15:37.613 18.299 - 18.394: 99.5460% ( 13) 00:15:37.613 18.394 - 18.489: 99.5758% ( 4) 00:15:37.613 18.489 - 18.584: 99.6130% ( 5) 00:15:37.613 18.584 - 18.679: 99.6205% ( 1) 00:15:37.613 18.679 - 18.773: 99.6502% ( 4) 00:15:37.613 18.773 - 18.868: 99.6725% ( 3) 00:15:37.613 18.868 - 18.963: 99.7023% ( 4) 00:15:37.613 18.963 - 19.058: 99.7321% ( 4) 00:15:37.613 19.153 - 19.247: 99.7395% ( 1) 00:15:37.613 19.247 - 19.342: 99.7470% ( 1) 00:15:37.613 19.342 - 19.437: 99.7544% ( 1) 00:15:37.613 19.437 - 19.532: 99.7619% ( 1) 00:15:37.613 19.816 - 19.911: 99.7693% ( 1) 00:15:37.613 20.101 - 20.196: 99.7842% ( 2) 00:15:37.613 20.480 - 20.575: 99.7916% ( 1) 00:15:37.613 21.618 - 21.713: 99.7991% ( 1) 00:15:37.613 22.092 - 22.187: 99.8065% ( 1) 00:15:37.613 22.376 - 22.471: 99.8139% ( 1) 00:15:37.613 22.566 - 22.661: 99.8214% ( 1) 00:15:37.613 22.661 - 22.756: 99.8288% ( 1) 00:15:37.613 22.756 - 22.850: 99.8363% ( 1) 00:15:37.613 23.609 - 23.704: 99.8437% ( 1) 00:15:37.613 24.178 - 24.273: 99.8512% ( 1) 00:15:37.613 24.273 - 24.462: 99.8586% ( 1) 00:15:37.613 24.652 - 24.841: 99.8660% ( 1) 00:15:37.613 24.841 - 25.031: 99.8735% ( 1) 00:15:37.613 26.738 - 26.927: 99.8809% ( 1) 00:15:37.613 26.927 - 27.117: 99.8884% ( 1) 00:15:37.613 27.496 - 27.686: 99.8958% ( 1) 00:15:37.613 3980.705 - 4004.978: 99.9553% ( 8) 00:15:37.613 4004.978 - 4029.250: 100.0000% ( 6) 00:15:37.613 00:15:37.613 Complete histogram 00:15:37.613 ================== 00:15:37.613 Range in us Cumulative Count 00:15:37.613 2.039 - 2.050: 
2.7908% ( 375) 00:15:37.613 2.050 - 2.062: 11.4311% ( 1161) 00:15:37.613 2.062 - 2.074: 13.6042% ( 292) 00:15:37.613 2.074 - 2.086: 35.8264% ( 2986) 00:15:37.613 2.086 - 2.098: 56.0542% ( 2718) 00:15:37.613 2.098 - 2.110: 59.8943% ( 516) 00:15:37.613 2.110 - 2.121: 64.6052% ( 633) 00:15:37.613 2.121 - 2.133: 66.6369% ( 273) 00:15:37.613 2.133 - 2.145: 67.9542% ( 177) 00:15:37.613 2.145 - 2.157: 76.7508% ( 1182) 00:15:37.613 2.157 - 2.169: 81.4393% ( 630) 00:15:37.613 2.169 - 2.181: 82.4440% ( 135) 00:15:37.613 2.181 - 2.193: 83.8059% ( 183) 00:15:37.613 2.193 - 2.204: 84.8999% ( 147) 00:15:37.613 2.204 - 2.216: 85.7037% ( 108) 00:15:37.613 2.216 - 2.228: 89.1791% ( 467) 00:15:37.613 2.228 - 2.240: 91.9699% ( 375) 00:15:37.613 2.240 - 2.252: 93.2425% ( 171) 00:15:37.613 2.252 - 2.264: 93.8974% ( 88) 00:15:37.613 2.264 - 2.276: 94.2026% ( 41) 00:15:37.613 2.276 - 2.287: 94.5151% ( 42) 00:15:37.613 2.287 - 2.299: 94.7235% ( 28) 00:15:37.613 2.299 - 2.311: 94.9617% ( 32) 00:15:37.613 2.311 - 2.323: 95.2594% ( 40) 00:15:37.613 2.323 - 2.335: 95.3933% ( 18) 00:15:37.613 2.335 - 2.347: 95.4380% ( 6) 00:15:37.613 2.347 - 2.359: 95.5124% ( 10) 00:15:37.613 2.359 - 2.370: 95.6091% ( 13) 00:15:37.613 2.370 - 2.382: 95.7431% ( 18) 00:15:37.613 2.382 - 2.394: 95.9589% ( 29) 00:15:37.613 2.394 - 2.406: 96.2194% ( 35) 00:15:37.613 2.406 - 2.418: 96.3831% ( 22) 00:15:37.613 2.418 - 2.430: 96.6585% ( 37) 00:15:37.613 2.430 - 2.441: 96.8966% ( 32) 00:15:37.613 2.441 - 2.453: 97.1050% ( 28) 00:15:37.613 2.453 - 2.465: 97.3208% ( 29) 00:15:37.613 2.465 - 2.477: 97.4622% ( 19) 00:15:37.613 2.477 - 2.489: 97.5813% ( 16) 00:15:37.613 2.489 - 2.501: 97.6855% ( 14) 00:15:37.613 2.501 - 2.513: 97.7897% ( 14) 00:15:37.613 2.513 - 2.524: 97.8269% ( 5) 00:15:37.613 2.524 - 2.536: 97.9088% ( 11) 00:15:37.613 2.536 - 2.548: 97.9757% ( 9) 00:15:37.613 2.548 - 2.560: 98.0129% ( 5) 00:15:37.613 2.560 - 2.572: 98.0278% ( 2) 00:15:37.613 2.572 - 2.584: 98.0502% ( 3) 00:15:37.613 2.584 - 2.596: 98.0650% ( 2) 00:15:37.613 2.596 - 2.607: 98.0725% ( 1) 00:15:37.613 2.607 - 2.619: 98.0799% ( 1) 00:15:37.613 2.631 - 2.643: 98.0948% ( 2) 00:15:37.613 2.643 - 2.655: 98.1023% ( 1) 00:15:37.613 2.667 - 2.679: 98.1097% ( 1) 00:15:37.613 2.679 - 2.690: 98.1171% ( 1) 00:15:37.613 2.726 - 2.738: 98.1395% ( 3) 00:15:37.613 2.750 - 2.761: 98.1469% ( 1) 00:15:37.613 2.773 - 2.785: 98.1543% ( 1) 00:15:37.613 2.785 - 2.797: 98.1618% ( 1) 00:15:37.613 2.809 - 2.821: 98.1841% ( 3) 00:15:37.613 2.833 - 2.844: 98.1916% ( 1) 00:15:37.614 2.856 - 2.868: 98.2064% ( 2) 00:15:37.614 2.939 - 2.951: 98.2139% ( 1) 00:15:37.614 3.010 - 3.022: 98.2288% ( 2) 00:15:37.614 3.034 - 3.058: 98.2437% ( 2) 00:15:37.614 3.058 - 3.081: 98.2585% ( 2) 00:15:37.614 3.081 - 3.105: 98.2809% ( 3) 00:15:37.614 3.105 - 3.129: 98.3032% ( 3) 00:15:37.614 3.129 - 3.153: 98.3255% ( 3) 00:15:37.614 3.153 - 3.176: 98.3330% ( 1) 00:15:37.614 3.176 - 3.200: 98.3553% ( 3) 00:15:37.614 3.200 - 3.224: 98.3776% ( 3) 00:15:37.614 3.224 - 3.247: 98.3851% ( 1) 00:15:37.614 3.247 - 3.271: 98.3925% ( 1) 00:15:37.614 3.271 - 3.295: 98.4223% ( 4) 00:15:37.614 3.295 - 3.319: 98.4297% ( 1) 00:15:37.614 3.319 - 3.342: 98.4372% ( 1) 00:15:37.614 3.342 - 3.366: 98.4446% ( 1) 00:15:37.614 3.366 - 3.390: 98.4595% ( 2) 00:15:37.614 3.390 - 3.413: 98.4744% ( 2) 00:15:37.614 3.413 - 3.437: 98.5190% ( 6) 00:15:37.614 3.437 - 3.461: 98.5339% ( 2) 00:15:37.614 3.484 - 3.508: 98.5488% ( 2) 00:15:37.614 3.508 - 3.532: 98.6158% ( 9) 00:15:37.614 3.532 - 3.556: 98.6232% ( 1) 00:15:37.614 3.556 - 3.579: 
98.6381% ( 2) 00:15:37.614 3.603 - 3.627: 98.6455% ( 1) 00:15:37.614 3.627 - 3.650: 98.6530% ( 1) 00:15:37.614 3.650 - 3.674: 98.6679% ( 2) 00:15:37.614 3.698 - 3.721: 98.6827% ( 2) 00:15:37.614 3.721 - 3.745: 98.6976% ( 2) 00:15:37.614 3.745 - 3.769: 98.7125% ( 2) 00:15:37.614 3.793 - 3.816: 98.7200% ( 1) 00:15:37.614 3.840 - 3.864: 98.7348% ( 2) 00:15:37.614 3.887 - 3.911: 98.7497% ( 2) 00:15:37.614 3.911 - 3.935: 98.7646% ( 2) 00:15:37.614 3.959 - 3.982: 98.7720% ( 1) 00:15:37.614 4.077 - 4.101: 98.7795% ( 1) 00:15:37.614 4.148 - 4.172: 98.7869% ( 1) 00:15:37.614 5.381 - 5.404: 98.7944% ( 1) 00:15:37.614 5.499 - 5.523: 98.8018% ( 1) 00:15:37.614 5.618 - 5.641: 98.8093% ( 1) 00:15:37.614 5.736 - 5.760: 98.8241% ( 2) 00:15:37.614 5.855 - 5.879: 98.8316% ( 1) 00:15:37.614 5.997 - 6.021: 98.8390% ( 1) 00:15:37.614 6.116 - 6.163: 98.8465% ( 1) 00:15:37.614 6.163 - 6.210: 98.8614% ( 2) 00:15:37.614 6.210 - 6.258: 98.8688% ( 1) 00:15:37.614 6.258 - 6.305: 98.8762% ( 1) 00:15:37.614 6.305 - 6.353: 98.8837% ( 1) 00:15:37.614 6.542 - 6.590: 98.8911% ( 1) 00:15:37.614 6.732 - 6.779: 98.8986% ( 1) 00:15:37.614 6.827 - 6.874: 98.9060% ( 1) 00:15:37.614 7.064 - 7.111: 98.9134% ( 1) 00:15:37.614 7.301 - 7.348: 98.9209% ( 1) 00:15:37.614 7.490 - 7.538: 98.9283% ( 1) 00:15:37.614 7.633 - 7.680: 98.9358% ( 1) 00:15:37.614 8.012 - 8.059: 98.9432% ( 1) 00:15:37.614 9.719 - 9.766: 98.9507% ( 1) 00:15:37.614 [2024-04-17 06:41:41.814697] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:37.614 9.766 - 9.813: 98.9581% ( 1) 00:15:37.614 11.852 - 11.899: 98.9655% ( 1) 00:15:37.614 15.550 - 15.644: 98.9730% ( 1) 00:15:37.614 15.644 - 15.739: 98.9879% ( 2) 00:15:37.614 15.739 - 15.834: 99.0102% ( 3) 00:15:37.614 15.834 - 15.929: 99.0325% ( 3) 00:15:37.614 15.929 - 16.024: 99.0548% ( 3) 00:15:37.614 16.024 - 16.119: 99.1069% ( 7) 00:15:37.614 16.119 - 16.213: 99.1218% ( 2) 00:15:37.614 16.213 - 16.308: 99.1293% ( 1) 00:15:37.614 16.308 - 16.403: 99.1590% ( 4) 00:15:37.614 16.403 - 16.498: 99.2037% ( 6) 00:15:37.614 16.498 - 16.593: 99.2558% ( 7) 00:15:37.614 16.593 - 16.687: 99.2707% ( 2) 00:15:37.614 16.687 - 16.782: 99.3079% ( 5) 00:15:37.614 16.782 - 16.877: 99.3376% ( 4) 00:15:37.614 16.877 - 16.972: 99.3451% ( 1) 00:15:37.614 16.972 - 17.067: 99.3897% ( 6) 00:15:37.614 17.067 - 17.161: 99.3972% ( 1) 00:15:37.614 17.161 - 17.256: 99.4046% ( 1) 00:15:37.614 17.256 - 17.351: 99.4195% ( 2) 00:15:37.614 17.351 - 17.446: 99.4344% ( 2) 00:15:37.614 17.541 - 17.636: 99.4418% ( 1) 00:15:37.614 17.825 - 17.920: 99.4493% ( 1) 00:15:37.614 18.110 - 18.204: 99.4567% ( 1) 00:15:37.614 18.394 - 18.489: 99.4642% ( 1) 00:15:37.614 20.006 - 20.101: 99.4716% ( 1) 00:15:37.614 3980.705 - 4004.978: 99.8958% ( 57) 00:15:37.614 4004.978 - 4029.250: 100.0000% ( 14) 00:15:37.614 00:15:37.614 06:41:41 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:37.614 06:41:41 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:37.614 06:41:41 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:37.614 06:41:41 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:37.614 06:41:41 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:37.614 [2024-04-17 06:41:42.122825] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature 
listener.transport is deprecated in favor of trtype to be removed in v24.05 00:15:37.614 [ 00:15:37.614 { 00:15:37.614 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:37.614 "subtype": "Discovery", 00:15:37.614 "listen_addresses": [], 00:15:37.614 "allow_any_host": true, 00:15:37.614 "hosts": [] 00:15:37.614 }, 00:15:37.614 { 00:15:37.614 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:37.614 "subtype": "NVMe", 00:15:37.614 "listen_addresses": [ 00:15:37.614 { 00:15:37.614 "transport": "VFIOUSER", 00:15:37.614 "trtype": "VFIOUSER", 00:15:37.614 "adrfam": "IPv4", 00:15:37.614 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:37.614 "trsvcid": "0" 00:15:37.614 } 00:15:37.614 ], 00:15:37.614 "allow_any_host": true, 00:15:37.614 "hosts": [], 00:15:37.614 "serial_number": "SPDK1", 00:15:37.614 "model_number": "SPDK bdev Controller", 00:15:37.614 "max_namespaces": 32, 00:15:37.614 "min_cntlid": 1, 00:15:37.614 "max_cntlid": 65519, 00:15:37.614 "namespaces": [ 00:15:37.614 { 00:15:37.614 "nsid": 1, 00:15:37.614 "bdev_name": "Malloc1", 00:15:37.614 "name": "Malloc1", 00:15:37.614 "nguid": "0F2E9C0F05D340D2BC4D208A803049EC", 00:15:37.614 "uuid": "0f2e9c0f-05d3-40d2-bc4d-208a803049ec" 00:15:37.614 } 00:15:37.614 ] 00:15:37.614 }, 00:15:37.614 { 00:15:37.614 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:37.614 "subtype": "NVMe", 00:15:37.614 "listen_addresses": [ 00:15:37.614 { 00:15:37.614 "transport": "VFIOUSER", 00:15:37.614 "trtype": "VFIOUSER", 00:15:37.614 "adrfam": "IPv4", 00:15:37.614 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:37.614 "trsvcid": "0" 00:15:37.614 } 00:15:37.614 ], 00:15:37.614 "allow_any_host": true, 00:15:37.614 "hosts": [], 00:15:37.614 "serial_number": "SPDK2", 00:15:37.614 "model_number": "SPDK bdev Controller", 00:15:37.614 "max_namespaces": 32, 00:15:37.614 "min_cntlid": 1, 00:15:37.614 "max_cntlid": 65519, 00:15:37.614 "namespaces": [ 00:15:37.614 { 00:15:37.614 "nsid": 1, 00:15:37.614 "bdev_name": "Malloc2", 00:15:37.614 "name": "Malloc2", 00:15:37.614 "nguid": "45FB7CF9F2EF425D9BA6489D2EDC7AAC", 00:15:37.614 "uuid": "45fb7cf9-f2ef-425d-9ba6-489d2edc7aac" 00:15:37.614 } 00:15:37.614 ] 00:15:37.614 } 00:15:37.614 ] 00:15:37.614 06:41:42 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:37.614 06:41:42 -- target/nvmf_vfio_user.sh@34 -- # aerpid=4155708 00:15:37.614 06:41:42 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:37.614 06:41:42 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:37.614 06:41:42 -- common/autotest_common.sh@1251 -- # local i=0 00:15:37.614 06:41:42 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:37.614 06:41:42 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:37.614 06:41:42 -- common/autotest_common.sh@1262 -- # return 0 00:15:37.614 06:41:42 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:37.614 06:41:42 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:37.614 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.873 [2024-04-17 06:41:42.303760] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:37.873 Malloc3 00:15:37.873 06:41:42 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:38.130 [2024-04-17 06:41:42.649333] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:38.130 06:41:42 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:38.130 Asynchronous Event Request test 00:15:38.130 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:38.130 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:38.130 Registering asynchronous event callbacks... 00:15:38.130 Starting namespace attribute notice tests for all controllers... 00:15:38.130 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:38.130 aer_cb - Changed Namespace 00:15:38.130 Cleaning up... 00:15:38.388 [ 00:15:38.388 { 00:15:38.388 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:38.388 "subtype": "Discovery", 00:15:38.388 "listen_addresses": [], 00:15:38.388 "allow_any_host": true, 00:15:38.388 "hosts": [] 00:15:38.388 }, 00:15:38.388 { 00:15:38.388 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:38.388 "subtype": "NVMe", 00:15:38.388 "listen_addresses": [ 00:15:38.388 { 00:15:38.388 "transport": "VFIOUSER", 00:15:38.388 "trtype": "VFIOUSER", 00:15:38.388 "adrfam": "IPv4", 00:15:38.388 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:38.388 "trsvcid": "0" 00:15:38.388 } 00:15:38.388 ], 00:15:38.388 "allow_any_host": true, 00:15:38.388 "hosts": [], 00:15:38.388 "serial_number": "SPDK1", 00:15:38.388 "model_number": "SPDK bdev Controller", 00:15:38.388 "max_namespaces": 32, 00:15:38.388 "min_cntlid": 1, 00:15:38.388 "max_cntlid": 65519, 00:15:38.388 "namespaces": [ 00:15:38.388 { 00:15:38.388 "nsid": 1, 00:15:38.388 "bdev_name": "Malloc1", 00:15:38.388 "name": "Malloc1", 00:15:38.388 "nguid": "0F2E9C0F05D340D2BC4D208A803049EC", 00:15:38.388 "uuid": "0f2e9c0f-05d3-40d2-bc4d-208a803049ec" 00:15:38.388 }, 00:15:38.388 { 00:15:38.388 "nsid": 2, 00:15:38.388 "bdev_name": "Malloc3", 00:15:38.388 "name": "Malloc3", 00:15:38.388 "nguid": "35E2AE25B2284812964B506F40AA5446", 00:15:38.388 "uuid": "35e2ae25-b228-4812-964b-506f40aa5446" 00:15:38.388 } 00:15:38.388 ] 00:15:38.388 }, 00:15:38.388 { 00:15:38.388 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:38.388 "subtype": "NVMe", 00:15:38.388 "listen_addresses": [ 00:15:38.388 { 00:15:38.388 "transport": "VFIOUSER", 00:15:38.388 "trtype": "VFIOUSER", 00:15:38.388 "adrfam": "IPv4", 00:15:38.388 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:38.388 "trsvcid": "0" 00:15:38.388 } 00:15:38.388 ], 00:15:38.388 "allow_any_host": true, 00:15:38.388 "hosts": [], 00:15:38.388 "serial_number": "SPDK2", 00:15:38.388 "model_number": "SPDK bdev Controller", 00:15:38.388 "max_namespaces": 32, 00:15:38.388 "min_cntlid": 1, 
00:15:38.388 "max_cntlid": 65519, 00:15:38.388 "namespaces": [ 00:15:38.388 { 00:15:38.388 "nsid": 1, 00:15:38.388 "bdev_name": "Malloc2", 00:15:38.388 "name": "Malloc2", 00:15:38.388 "nguid": "45FB7CF9F2EF425D9BA6489D2EDC7AAC", 00:15:38.388 "uuid": "45fb7cf9-f2ef-425d-9ba6-489d2edc7aac" 00:15:38.388 } 00:15:38.388 ] 00:15:38.388 } 00:15:38.388 ] 00:15:38.388 06:41:42 -- target/nvmf_vfio_user.sh@44 -- # wait 4155708 00:15:38.388 06:41:42 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:38.388 06:41:42 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:38.388 06:41:42 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:38.388 06:41:42 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:38.388 [2024-04-17 06:41:42.914206] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:15:38.389 [2024-04-17 06:41:42.914249] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4155739 ] 00:15:38.389 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.389 [2024-04-17 06:41:42.950244] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:38.389 [2024-04-17 06:41:42.952599] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:38.389 [2024-04-17 06:41:42.952629] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2c6f2d6000 00:15:38.389 [2024-04-17 06:41:42.953599] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:38.389 [2024-04-17 06:41:42.954603] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:38.389 [2024-04-17 06:41:42.955614] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:38.389 [2024-04-17 06:41:42.956620] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:38.389 [2024-04-17 06:41:42.957628] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:38.389 [2024-04-17 06:41:42.958636] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:38.389 [2024-04-17 06:41:42.959644] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:38.389 [2024-04-17 06:41:42.960648] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:38.389 [2024-04-17 06:41:42.961658] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:38.389 [2024-04-17 06:41:42.961679] vfio_user_pci.c: 
233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2c6e088000 00:15:38.389 [2024-04-17 06:41:42.962816] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:38.389 [2024-04-17 06:41:42.980562] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:38.389 [2024-04-17 06:41:42.980596] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:38.389 [2024-04-17 06:41:42.982670] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:38.389 [2024-04-17 06:41:42.982721] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:38.389 [2024-04-17 06:41:42.982806] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:38.389 [2024-04-17 06:41:42.982831] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:38.389 [2024-04-17 06:41:42.982841] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:38.389 [2024-04-17 06:41:42.983679] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:38.389 [2024-04-17 06:41:42.983699] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:38.389 [2024-04-17 06:41:42.983711] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:38.389 [2024-04-17 06:41:42.984685] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:38.389 [2024-04-17 06:41:42.984705] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:38.389 [2024-04-17 06:41:42.984718] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:38.389 [2024-04-17 06:41:42.985695] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:38.389 [2024-04-17 06:41:42.985714] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:38.389 [2024-04-17 06:41:42.986701] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:38.389 [2024-04-17 06:41:42.986719] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:38.389 [2024-04-17 06:41:42.986728] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:38.389 [2024-04-17 06:41:42.986743] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:38.389 [2024-04-17 06:41:42.986853] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:38.389 [2024-04-17 06:41:42.986861] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:38.389 [2024-04-17 06:41:42.986869] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:38.389 [2024-04-17 06:41:42.987709] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:38.389 [2024-04-17 06:41:42.988716] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:38.389 [2024-04-17 06:41:42.989726] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:38.389 [2024-04-17 06:41:42.990736] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:38.389 [2024-04-17 06:41:42.990818] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:38.389 [2024-04-17 06:41:42.991760] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:38.389 [2024-04-17 06:41:42.991780] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:38.389 [2024-04-17 06:41:42.991790] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:38.389 [2024-04-17 06:41:42.991813] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:38.389 [2024-04-17 06:41:42.991827] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:38.389 [2024-04-17 06:41:42.991848] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:38.389 [2024-04-17 06:41:42.991858] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:38.389 [2024-04-17 06:41:42.991890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:38.648 [2024-04-17 06:41:42.998195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:38.648 [2024-04-17 06:41:42.998219] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:38.648 [2024-04-17 06:41:42.998238] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:38.648 [2024-04-17 06:41:42.998246] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:38.648 [2024-04-17 06:41:42.998253] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:38.648 [2024-04-17 06:41:42.998261] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:38.648 [2024-04-17 06:41:42.998269] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:38.648 [2024-04-17 06:41:42.998277] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:38.648 [2024-04-17 06:41:42.998293] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:38.648 [2024-04-17 06:41:42.998310] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:38.648 [2024-04-17 06:41:43.006186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:38.648 [2024-04-17 06:41:43.006215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.648 [2024-04-17 06:41:43.006235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.648 [2024-04-17 06:41:43.006247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.648 [2024-04-17 06:41:43.006258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.648 [2024-04-17 06:41:43.006266] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:38.648 [2024-04-17 06:41:43.006281] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:38.648 [2024-04-17 06:41:43.006295] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:38.648 [2024-04-17 06:41:43.014185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:38.648 [2024-04-17 06:41:43.014202] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:38.648 [2024-04-17 06:41:43.014211] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:38.648 [2024-04-17 06:41:43.014247] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:38.648 [2024-04-17 06:41:43.014258] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:38.648 [2024-04-17 06:41:43.014272] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:38.648 [2024-04-17 06:41:43.022186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:38.648 [2024-04-17 06:41:43.022260] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:38.648 [2024-04-17 06:41:43.022276] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:38.648 [2024-04-17 06:41:43.022290] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:38.649 [2024-04-17 06:41:43.022298] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:38.649 [2024-04-17 06:41:43.022308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:38.649 [2024-04-17 06:41:43.030185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:38.649 [2024-04-17 06:41:43.030218] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:38.649 [2024-04-17 06:41:43.030234] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:38.649 [2024-04-17 06:41:43.030252] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:38.649 [2024-04-17 06:41:43.030266] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:38.649 [2024-04-17 06:41:43.030274] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:38.649 [2024-04-17 06:41:43.030284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:38.649 [2024-04-17 06:41:43.037224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:38.649 [2024-04-17 06:41:43.037253] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:38.649 [2024-04-17 06:41:43.037269] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:38.649 [2024-04-17 06:41:43.037282] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:38.649 [2024-04-17 06:41:43.037290] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:38.649 [2024-04-17 06:41:43.037300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:38.649 [2024-04-17 06:41:43.046188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:38.649 [2024-04-17 06:41:43.046209] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:38.649 [2024-04-17 06:41:43.046222] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:38.649 [2024-04-17 06:41:43.046237] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:38.649 [2024-04-17 06:41:43.046247] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:38.649 [2024-04-17 06:41:43.046255] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:38.649 [2024-04-17 06:41:43.046264] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:38.649 [2024-04-17 06:41:43.046271] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:38.649 [2024-04-17 06:41:43.046279] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:38.649 [2024-04-17 06:41:43.046305] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:38.649 [2024-04-17 06:41:43.054187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:38.649 [2024-04-17 06:41:43.054213] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:38.649 [2024-04-17 06:41:43.062199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:38.649 [2024-04-17 06:41:43.062224] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:38.649 [2024-04-17 06:41:43.070186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:38.649 [2024-04-17 06:41:43.070216] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:38.649 [2024-04-17 06:41:43.078190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:38.649 [2024-04-17 06:41:43.078216] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:38.649 [2024-04-17 06:41:43.078225] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:38.649 [2024-04-17 06:41:43.078231] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:38.649 [2024-04-17 06:41:43.078237] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:38.649 [2024-04-17 06:41:43.078247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:38.649 
[2024-04-17 06:41:43.078258] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:38.649 [2024-04-17 06:41:43.078266] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:38.649 [2024-04-17 06:41:43.078275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:38.649 [2024-04-17 06:41:43.078286] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:38.649 [2024-04-17 06:41:43.078294] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:38.649 [2024-04-17 06:41:43.078303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:38.649 [2024-04-17 06:41:43.078314] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:38.649 [2024-04-17 06:41:43.078322] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:38.649 [2024-04-17 06:41:43.078331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:38.649 [2024-04-17 06:41:43.086186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:38.649 [2024-04-17 06:41:43.086216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:38.649 [2024-04-17 06:41:43.086232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:38.649 [2024-04-17 06:41:43.086244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:38.649 ===================================================== 00:15:38.649 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:38.649 ===================================================== 00:15:38.649 Controller Capabilities/Features 00:15:38.649 ================================ 00:15:38.649 Vendor ID: 4e58 00:15:38.649 Subsystem Vendor ID: 4e58 00:15:38.649 Serial Number: SPDK2 00:15:38.649 Model Number: SPDK bdev Controller 00:15:38.649 Firmware Version: 24.05 00:15:38.649 Recommended Arb Burst: 6 00:15:38.649 IEEE OUI Identifier: 8d 6b 50 00:15:38.649 Multi-path I/O 00:15:38.649 May have multiple subsystem ports: Yes 00:15:38.649 May have multiple controllers: Yes 00:15:38.649 Associated with SR-IOV VF: No 00:15:38.649 Max Data Transfer Size: 131072 00:15:38.649 Max Number of Namespaces: 32 00:15:38.649 Max Number of I/O Queues: 127 00:15:38.649 NVMe Specification Version (VS): 1.3 00:15:38.649 NVMe Specification Version (Identify): 1.3 00:15:38.649 Maximum Queue Entries: 256 00:15:38.649 Contiguous Queues Required: Yes 00:15:38.649 Arbitration Mechanisms Supported 00:15:38.649 Weighted Round Robin: Not Supported 00:15:38.649 Vendor Specific: Not Supported 00:15:38.649 Reset Timeout: 15000 ms 00:15:38.649 Doorbell Stride: 4 bytes 00:15:38.649 NVM Subsystem Reset: Not Supported 00:15:38.649 Command Sets Supported 00:15:38.649 NVM Command Set: Supported 00:15:38.649 Boot Partition: Not Supported 00:15:38.649 
Memory Page Size Minimum: 4096 bytes 00:15:38.649 Memory Page Size Maximum: 4096 bytes 00:15:38.649 Persistent Memory Region: Not Supported 00:15:38.649 Optional Asynchronous Events Supported 00:15:38.649 Namespace Attribute Notices: Supported 00:15:38.649 Firmware Activation Notices: Not Supported 00:15:38.649 ANA Change Notices: Not Supported 00:15:38.649 PLE Aggregate Log Change Notices: Not Supported 00:15:38.649 LBA Status Info Alert Notices: Not Supported 00:15:38.649 EGE Aggregate Log Change Notices: Not Supported 00:15:38.649 Normal NVM Subsystem Shutdown event: Not Supported 00:15:38.649 Zone Descriptor Change Notices: Not Supported 00:15:38.649 Discovery Log Change Notices: Not Supported 00:15:38.649 Controller Attributes 00:15:38.649 128-bit Host Identifier: Supported 00:15:38.649 Non-Operational Permissive Mode: Not Supported 00:15:38.649 NVM Sets: Not Supported 00:15:38.649 Read Recovery Levels: Not Supported 00:15:38.649 Endurance Groups: Not Supported 00:15:38.649 Predictable Latency Mode: Not Supported 00:15:38.649 Traffic Based Keep ALive: Not Supported 00:15:38.649 Namespace Granularity: Not Supported 00:15:38.649 SQ Associations: Not Supported 00:15:38.649 UUID List: Not Supported 00:15:38.649 Multi-Domain Subsystem: Not Supported 00:15:38.649 Fixed Capacity Management: Not Supported 00:15:38.649 Variable Capacity Management: Not Supported 00:15:38.649 Delete Endurance Group: Not Supported 00:15:38.649 Delete NVM Set: Not Supported 00:15:38.649 Extended LBA Formats Supported: Not Supported 00:15:38.649 Flexible Data Placement Supported: Not Supported 00:15:38.649 00:15:38.649 Controller Memory Buffer Support 00:15:38.649 ================================ 00:15:38.649 Supported: No 00:15:38.649 00:15:38.649 Persistent Memory Region Support 00:15:38.649 ================================ 00:15:38.649 Supported: No 00:15:38.649 00:15:38.649 Admin Command Set Attributes 00:15:38.649 ============================ 00:15:38.649 Security Send/Receive: Not Supported 00:15:38.649 Format NVM: Not Supported 00:15:38.649 Firmware Activate/Download: Not Supported 00:15:38.650 Namespace Management: Not Supported 00:15:38.650 Device Self-Test: Not Supported 00:15:38.650 Directives: Not Supported 00:15:38.650 NVMe-MI: Not Supported 00:15:38.650 Virtualization Management: Not Supported 00:15:38.650 Doorbell Buffer Config: Not Supported 00:15:38.650 Get LBA Status Capability: Not Supported 00:15:38.650 Command & Feature Lockdown Capability: Not Supported 00:15:38.650 Abort Command Limit: 4 00:15:38.650 Async Event Request Limit: 4 00:15:38.650 Number of Firmware Slots: N/A 00:15:38.650 Firmware Slot 1 Read-Only: N/A 00:15:38.650 Firmware Activation Without Reset: N/A 00:15:38.650 Multiple Update Detection Support: N/A 00:15:38.650 Firmware Update Granularity: No Information Provided 00:15:38.650 Per-Namespace SMART Log: No 00:15:38.650 Asymmetric Namespace Access Log Page: Not Supported 00:15:38.650 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:38.650 Command Effects Log Page: Supported 00:15:38.650 Get Log Page Extended Data: Supported 00:15:38.650 Telemetry Log Pages: Not Supported 00:15:38.650 Persistent Event Log Pages: Not Supported 00:15:38.650 Supported Log Pages Log Page: May Support 00:15:38.650 Commands Supported & Effects Log Page: Not Supported 00:15:38.650 Feature Identifiers & Effects Log Page:May Support 00:15:38.650 NVMe-MI Commands & Effects Log Page: May Support 00:15:38.650 Data Area 4 for Telemetry Log: Not Supported 00:15:38.650 Error Log Page Entries Supported: 128 
00:15:38.650 Keep Alive: Supported 00:15:38.650 Keep Alive Granularity: 10000 ms 00:15:38.650 00:15:38.650 NVM Command Set Attributes 00:15:38.650 ========================== 00:15:38.650 Submission Queue Entry Size 00:15:38.650 Max: 64 00:15:38.650 Min: 64 00:15:38.650 Completion Queue Entry Size 00:15:38.650 Max: 16 00:15:38.650 Min: 16 00:15:38.650 Number of Namespaces: 32 00:15:38.650 Compare Command: Supported 00:15:38.650 Write Uncorrectable Command: Not Supported 00:15:38.650 Dataset Management Command: Supported 00:15:38.650 Write Zeroes Command: Supported 00:15:38.650 Set Features Save Field: Not Supported 00:15:38.650 Reservations: Not Supported 00:15:38.650 Timestamp: Not Supported 00:15:38.650 Copy: Supported 00:15:38.650 Volatile Write Cache: Present 00:15:38.650 Atomic Write Unit (Normal): 1 00:15:38.650 Atomic Write Unit (PFail): 1 00:15:38.650 Atomic Compare & Write Unit: 1 00:15:38.650 Fused Compare & Write: Supported 00:15:38.650 Scatter-Gather List 00:15:38.650 SGL Command Set: Supported (Dword aligned) 00:15:38.650 SGL Keyed: Not Supported 00:15:38.650 SGL Bit Bucket Descriptor: Not Supported 00:15:38.650 SGL Metadata Pointer: Not Supported 00:15:38.650 Oversized SGL: Not Supported 00:15:38.650 SGL Metadata Address: Not Supported 00:15:38.650 SGL Offset: Not Supported 00:15:38.650 Transport SGL Data Block: Not Supported 00:15:38.650 Replay Protected Memory Block: Not Supported 00:15:38.650 00:15:38.650 Firmware Slot Information 00:15:38.650 ========================= 00:15:38.650 Active slot: 1 00:15:38.650 Slot 1 Firmware Revision: 24.05 00:15:38.650 00:15:38.650 00:15:38.650 Commands Supported and Effects 00:15:38.650 ============================== 00:15:38.650 Admin Commands 00:15:38.650 -------------- 00:15:38.650 Get Log Page (02h): Supported 00:15:38.650 Identify (06h): Supported 00:15:38.650 Abort (08h): Supported 00:15:38.650 Set Features (09h): Supported 00:15:38.650 Get Features (0Ah): Supported 00:15:38.650 Asynchronous Event Request (0Ch): Supported 00:15:38.650 Keep Alive (18h): Supported 00:15:38.650 I/O Commands 00:15:38.650 ------------ 00:15:38.650 Flush (00h): Supported LBA-Change 00:15:38.650 Write (01h): Supported LBA-Change 00:15:38.650 Read (02h): Supported 00:15:38.650 Compare (05h): Supported 00:15:38.650 Write Zeroes (08h): Supported LBA-Change 00:15:38.650 Dataset Management (09h): Supported LBA-Change 00:15:38.650 Copy (19h): Supported LBA-Change 00:15:38.650 Unknown (79h): Supported LBA-Change 00:15:38.650 Unknown (7Ah): Supported 00:15:38.650 00:15:38.650 Error Log 00:15:38.650 ========= 00:15:38.650 00:15:38.650 Arbitration 00:15:38.650 =========== 00:15:38.650 Arbitration Burst: 1 00:15:38.650 00:15:38.650 Power Management 00:15:38.650 ================ 00:15:38.650 Number of Power States: 1 00:15:38.650 Current Power State: Power State #0 00:15:38.650 Power State #0: 00:15:38.650 Max Power: 0.00 W 00:15:38.650 Non-Operational State: Operational 00:15:38.650 Entry Latency: Not Reported 00:15:38.650 Exit Latency: Not Reported 00:15:38.650 Relative Read Throughput: 0 00:15:38.650 Relative Read Latency: 0 00:15:38.650 Relative Write Throughput: 0 00:15:38.650 Relative Write Latency: 0 00:15:38.650 Idle Power: Not Reported 00:15:38.650 Active Power: Not Reported 00:15:38.650 Non-Operational Permissive Mode: Not Supported 00:15:38.650 00:15:38.650 Health Information 00:15:38.650 ================== 00:15:38.650 Critical Warnings: 00:15:38.650 Available Spare Space: OK 00:15:38.650 Temperature: OK 00:15:38.650 Device Reliability: OK 00:15:38.650 
Read Only: No 00:15:38.650 Volatile Memory Backup: OK 00:15:38.650 Current Temperature: 0 Kelvin (-2[2024-04-17 06:41:43.086370] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:38.650 [2024-04-17 06:41:43.094188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:38.650 [2024-04-17 06:41:43.094234] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:38.650 [2024-04-17 06:41:43.094251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.650 [2024-04-17 06:41:43.094262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.650 [2024-04-17 06:41:43.094271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.650 [2024-04-17 06:41:43.094281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.650 [2024-04-17 06:41:43.094362] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:38.650 [2024-04-17 06:41:43.094386] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:38.650 [2024-04-17 06:41:43.095359] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:38.650 [2024-04-17 06:41:43.095427] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:38.650 [2024-04-17 06:41:43.095441] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:38.650 [2024-04-17 06:41:43.096374] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:38.650 [2024-04-17 06:41:43.096398] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:38.650 [2024-04-17 06:41:43.096450] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:38.650 [2024-04-17 06:41:43.099187] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:38.650 73 Celsius) 00:15:38.650 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:38.650 Available Spare: 0% 00:15:38.650 Available Spare Threshold: 0% 00:15:38.650 Life Percentage Used: 0% 00:15:38.650 Data Units Read: 0 00:15:38.650 Data Units Written: 0 00:15:38.650 Host Read Commands: 0 00:15:38.650 Host Write Commands: 0 00:15:38.650 Controller Busy Time: 0 minutes 00:15:38.650 Power Cycles: 0 00:15:38.650 Power On Hours: 0 hours 00:15:38.650 Unsafe Shutdowns: 0 00:15:38.650 Unrecoverable Media Errors: 0 00:15:38.650 Lifetime Error Log Entries: 0 00:15:38.650 Warning Temperature Time: 0 minutes 00:15:38.650 Critical Temperature Time: 0 minutes 00:15:38.650 00:15:38.650 Number of Queues 00:15:38.650 ================ 00:15:38.650 Number of I/O Submission Queues: 127 
00:15:38.650 Number of I/O Completion Queues: 127 00:15:38.650 00:15:38.650 Active Namespaces 00:15:38.650 ================= 00:15:38.650 Namespace ID:1 00:15:38.650 Error Recovery Timeout: Unlimited 00:15:38.650 Command Set Identifier: NVM (00h) 00:15:38.650 Deallocate: Supported 00:15:38.650 Deallocated/Unwritten Error: Not Supported 00:15:38.650 Deallocated Read Value: Unknown 00:15:38.650 Deallocate in Write Zeroes: Not Supported 00:15:38.650 Deallocated Guard Field: 0xFFFF 00:15:38.650 Flush: Supported 00:15:38.650 Reservation: Supported 00:15:38.650 Namespace Sharing Capabilities: Multiple Controllers 00:15:38.650 Size (in LBAs): 131072 (0GiB) 00:15:38.650 Capacity (in LBAs): 131072 (0GiB) 00:15:38.650 Utilization (in LBAs): 131072 (0GiB) 00:15:38.650 NGUID: 45FB7CF9F2EF425D9BA6489D2EDC7AAC 00:15:38.650 UUID: 45fb7cf9-f2ef-425d-9ba6-489d2edc7aac 00:15:38.650 Thin Provisioning: Not Supported 00:15:38.650 Per-NS Atomic Units: Yes 00:15:38.650 Atomic Boundary Size (Normal): 0 00:15:38.650 Atomic Boundary Size (PFail): 0 00:15:38.651 Atomic Boundary Offset: 0 00:15:38.651 Maximum Single Source Range Length: 65535 00:15:38.651 Maximum Copy Length: 65535 00:15:38.651 Maximum Source Range Count: 1 00:15:38.651 NGUID/EUI64 Never Reused: No 00:15:38.651 Namespace Write Protected: No 00:15:38.651 Number of LBA Formats: 1 00:15:38.651 Current LBA Format: LBA Format #00 00:15:38.651 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:38.651 00:15:38.651 06:41:43 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:38.651 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.909 [2024-04-17 06:41:43.326931] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:44.184 [2024-04-17 06:41:48.433540] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:44.184 Initializing NVMe Controllers 00:15:44.184 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:44.184 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:44.184 Initialization complete. Launching workers. 
00:15:44.184 ======================================================== 00:15:44.184 Latency(us) 00:15:44.184 Device Information : IOPS MiB/s Average min max 00:15:44.184 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33460.13 130.70 3824.64 1202.94 10622.60 00:15:44.184 ======================================================== 00:15:44.184 Total : 33460.13 130.70 3824.64 1202.94 10622.60 00:15:44.184 00:15:44.184 06:41:48 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:44.184 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.184 [2024-04-17 06:41:48.665224] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:49.502 [2024-04-17 06:41:53.687488] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:49.502 Initializing NVMe Controllers 00:15:49.502 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:49.502 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:49.502 Initialization complete. Launching workers. 00:15:49.502 ======================================================== 00:15:49.502 Latency(us) 00:15:49.502 Device Information : IOPS MiB/s Average min max 00:15:49.502 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31436.20 122.80 4073.62 1214.47 10265.94 00:15:49.502 ======================================================== 00:15:49.502 Total : 31436.20 122.80 4073.62 1214.47 10265.94 00:15:49.502 00:15:49.502 06:41:53 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:49.502 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.502 [2024-04-17 06:41:53.893370] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:54.764 [2024-04-17 06:41:59.028317] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:54.764 Initializing NVMe Controllers 00:15:54.764 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:54.764 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:54.764 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:54.764 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:54.764 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:54.764 Initialization complete. Launching workers. 
00:15:54.764 Starting thread on core 2 00:15:54.764 Starting thread on core 3 00:15:54.764 Starting thread on core 1 00:15:54.764 06:41:59 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:54.764 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.764 [2024-04-17 06:41:59.337654] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:58.049 [2024-04-17 06:42:02.420214] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:58.049 Initializing NVMe Controllers 00:15:58.049 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:58.049 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:58.049 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:58.049 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:58.049 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:58.049 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:58.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:58.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:58.049 Initialization complete. Launching workers. 00:15:58.049 Starting thread on core 1 with urgent priority queue 00:15:58.049 Starting thread on core 2 with urgent priority queue 00:15:58.049 Starting thread on core 3 with urgent priority queue 00:15:58.049 Starting thread on core 0 with urgent priority queue 00:15:58.049 SPDK bdev Controller (SPDK2 ) core 0: 5996.33 IO/s 16.68 secs/100000 ios 00:15:58.050 SPDK bdev Controller (SPDK2 ) core 1: 5823.67 IO/s 17.17 secs/100000 ios 00:15:58.050 SPDK bdev Controller (SPDK2 ) core 2: 6445.33 IO/s 15.52 secs/100000 ios 00:15:58.050 SPDK bdev Controller (SPDK2 ) core 3: 5500.00 IO/s 18.18 secs/100000 ios 00:15:58.050 ======================================================== 00:15:58.050 00:15:58.050 06:42:02 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:58.050 EAL: No free 2048 kB hugepages reported on node 1 00:15:58.316 [2024-04-17 06:42:02.713590] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:58.316 [2024-04-17 06:42:02.723677] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:58.316 Initializing NVMe Controllers 00:15:58.316 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:58.316 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:58.316 Namespace ID: 1 size: 0GB 00:15:58.316 Initialization complete. 00:15:58.316 INFO: using host memory buffer for IO 00:15:58.316 Hello world! 
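For reference, every example binary in this run targets the same vfio-user endpoint through an -r transport ID string. Below is a minimal sketch of replaying the hello_world and read-perf invocations recorded above, assuming the SPDK build at the same workspace path and the target still listening on the vfio-user2 socket; the flags are copied verbatim from the log lines above:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

  # hello_world attaches to the controller, writes one block and reads it back
  "$SPDK/build/examples/hello_world" -d 256 -g -r "$TRID"

  # spdk_nvme_perf: queue depth 128 (-q), 4096-byte I/O (-o), sequential reads (-w read)
  # for 5 seconds (-t) on core mask 0x2 (-c), same as the read run earlier in this log
  "$SPDK/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2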
00:15:58.316 06:42:02 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:58.316 EAL: No free 2048 kB hugepages reported on node 1 00:15:58.573 [2024-04-17 06:42:03.010108] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:59.505 Initializing NVMe Controllers 00:15:59.505 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:59.505 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:59.505 Initialization complete. Launching workers. 00:15:59.505 submit (in ns) avg, min, max = 8404.8, 3477.8, 4018785.6 00:15:59.505 complete (in ns) avg, min, max = 25464.1, 2071.1, 4027576.7 00:15:59.505 00:15:59.505 Submit histogram 00:15:59.505 ================ 00:15:59.505 Range in us Cumulative Count 00:15:59.505 3.461 - 3.484: 0.0372% ( 5) 00:15:59.505 3.484 - 3.508: 0.3197% ( 38) 00:15:59.505 3.508 - 3.532: 1.4794% ( 156) 00:15:59.505 3.532 - 3.556: 4.1187% ( 355) 00:15:59.505 3.556 - 3.579: 10.1777% ( 815) 00:15:59.505 3.579 - 3.603: 18.0804% ( 1063) 00:15:59.505 3.603 - 3.627: 27.8938% ( 1320) 00:15:59.505 3.627 - 3.650: 37.8262% ( 1336) 00:15:59.505 3.650 - 3.674: 46.0784% ( 1110) 00:15:59.505 3.674 - 3.698: 52.4348% ( 855) 00:15:59.505 3.698 - 3.721: 57.4084% ( 669) 00:15:59.505 3.721 - 3.745: 61.7278% ( 581) 00:15:59.505 3.745 - 3.769: 64.9245% ( 430) 00:15:59.505 3.769 - 3.793: 68.2105% ( 442) 00:15:59.505 3.793 - 3.816: 71.1992% ( 402) 00:15:59.505 3.816 - 3.840: 74.7305% ( 475) 00:15:59.505 3.840 - 3.864: 78.5890% ( 519) 00:15:59.505 3.864 - 3.887: 82.2615% ( 494) 00:15:59.505 3.887 - 3.911: 85.0346% ( 373) 00:15:59.505 3.911 - 3.935: 87.2575% ( 299) 00:15:59.505 3.935 - 3.959: 88.9153% ( 223) 00:15:59.505 3.959 - 3.982: 90.3353% ( 191) 00:15:59.505 3.982 - 4.006: 91.3389% ( 135) 00:15:59.505 4.006 - 4.030: 92.2980% ( 129) 00:15:59.505 4.030 - 4.053: 93.0488% ( 101) 00:15:59.505 4.053 - 4.077: 93.7625% ( 96) 00:15:59.505 4.077 - 4.101: 94.4688% ( 95) 00:15:59.505 4.101 - 4.124: 95.0338% ( 76) 00:15:59.505 4.124 - 4.148: 95.3981% ( 49) 00:15:59.505 4.148 - 4.172: 95.7550% ( 48) 00:15:59.505 4.172 - 4.196: 95.9483% ( 26) 00:15:59.505 4.196 - 4.219: 96.1341% ( 25) 00:15:59.505 4.219 - 4.243: 96.2382% ( 14) 00:15:59.505 4.243 - 4.267: 96.3200% ( 11) 00:15:59.505 4.267 - 4.290: 96.4092% ( 12) 00:15:59.505 4.290 - 4.314: 96.5281% ( 16) 00:15:59.505 4.314 - 4.338: 96.6248% ( 13) 00:15:59.505 4.338 - 4.361: 96.7289% ( 14) 00:15:59.505 4.361 - 4.385: 96.8032% ( 10) 00:15:59.505 4.385 - 4.409: 96.8329% ( 4) 00:15:59.505 4.409 - 4.433: 96.8999% ( 9) 00:15:59.505 4.433 - 4.456: 96.9370% ( 5) 00:15:59.505 4.456 - 4.480: 96.9519% ( 2) 00:15:59.505 4.480 - 4.504: 96.9742% ( 3) 00:15:59.505 4.527 - 4.551: 96.9816% ( 1) 00:15:59.505 4.551 - 4.575: 96.9891% ( 1) 00:15:59.505 4.622 - 4.646: 96.9965% ( 1) 00:15:59.505 4.646 - 4.670: 97.0188% ( 3) 00:15:59.505 4.670 - 4.693: 97.0560% ( 5) 00:15:59.505 4.693 - 4.717: 97.0857% ( 4) 00:15:59.505 4.717 - 4.741: 97.1155% ( 4) 00:15:59.505 4.741 - 4.764: 97.1452% ( 4) 00:15:59.505 4.764 - 4.788: 97.1824% ( 5) 00:15:59.505 4.788 - 4.812: 97.2493% ( 9) 00:15:59.506 4.812 - 4.836: 97.2939% ( 6) 00:15:59.506 4.836 - 4.859: 97.3385% ( 6) 00:15:59.506 4.859 - 4.883: 97.4054% ( 9) 00:15:59.506 4.883 - 4.907: 97.4500% ( 6) 00:15:59.506 4.907 - 4.930: 97.5318% ( 11) 00:15:59.506 4.930 - 4.954: 97.6061% ( 10) 00:15:59.506 
4.954 - 4.978: 97.6582% ( 7) 00:15:59.506 4.978 - 5.001: 97.6805% ( 3) 00:15:59.506 5.001 - 5.025: 97.6953% ( 2) 00:15:59.506 5.025 - 5.049: 97.7474% ( 7) 00:15:59.506 5.049 - 5.073: 97.7994% ( 7) 00:15:59.506 5.073 - 5.096: 97.8292% ( 4) 00:15:59.506 5.096 - 5.120: 97.8515% ( 3) 00:15:59.506 5.120 - 5.144: 97.8886% ( 5) 00:15:59.506 5.144 - 5.167: 97.9184% ( 4) 00:15:59.506 5.167 - 5.191: 97.9481% ( 4) 00:15:59.506 5.191 - 5.215: 97.9778% ( 4) 00:15:59.506 5.239 - 5.262: 98.0001% ( 3) 00:15:59.506 5.262 - 5.286: 98.0150% ( 2) 00:15:59.506 5.286 - 5.310: 98.0225% ( 1) 00:15:59.506 5.310 - 5.333: 98.0448% ( 3) 00:15:59.506 5.333 - 5.357: 98.0522% ( 1) 00:15:59.506 5.357 - 5.381: 98.0596% ( 1) 00:15:59.506 5.452 - 5.476: 98.0671% ( 1) 00:15:59.506 5.476 - 5.499: 98.0745% ( 1) 00:15:59.506 5.499 - 5.523: 98.0819% ( 1) 00:15:59.506 5.523 - 5.547: 98.0894% ( 1) 00:15:59.506 5.547 - 5.570: 98.0968% ( 1) 00:15:59.506 5.570 - 5.594: 98.1042% ( 1) 00:15:59.506 5.594 - 5.618: 98.1191% ( 2) 00:15:59.506 5.618 - 5.641: 98.1340% ( 2) 00:15:59.506 5.641 - 5.665: 98.1637% ( 4) 00:15:59.506 5.665 - 5.689: 98.1786% ( 2) 00:15:59.506 5.713 - 5.736: 98.1934% ( 2) 00:15:59.506 5.760 - 5.784: 98.2009% ( 1) 00:15:59.506 5.807 - 5.831: 98.2083% ( 1) 00:15:59.506 5.855 - 5.879: 98.2157% ( 1) 00:15:59.506 5.902 - 5.926: 98.2232% ( 1) 00:15:59.506 5.926 - 5.950: 98.2380% ( 2) 00:15:59.506 5.950 - 5.973: 98.2455% ( 1) 00:15:59.506 6.044 - 6.068: 98.2529% ( 1) 00:15:59.506 6.068 - 6.116: 98.2678% ( 2) 00:15:59.506 6.210 - 6.258: 98.2752% ( 1) 00:15:59.506 6.258 - 6.305: 98.2827% ( 1) 00:15:59.506 6.400 - 6.447: 98.2901% ( 1) 00:15:59.506 6.637 - 6.684: 98.2975% ( 1) 00:15:59.506 6.874 - 6.921: 98.3050% ( 1) 00:15:59.506 7.206 - 7.253: 98.3124% ( 1) 00:15:59.506 7.490 - 7.538: 98.3198% ( 1) 00:15:59.506 7.585 - 7.633: 98.3273% ( 1) 00:15:59.506 7.633 - 7.680: 98.3347% ( 1) 00:15:59.506 7.680 - 7.727: 98.3496% ( 2) 00:15:59.506 7.822 - 7.870: 98.3570% ( 1) 00:15:59.506 7.870 - 7.917: 98.3644% ( 1) 00:15:59.506 7.917 - 7.964: 98.3719% ( 1) 00:15:59.506 8.012 - 8.059: 98.3793% ( 1) 00:15:59.506 8.059 - 8.107: 98.3942% ( 2) 00:15:59.506 8.107 - 8.154: 98.4090% ( 2) 00:15:59.506 8.201 - 8.249: 98.4165% ( 1) 00:15:59.506 8.249 - 8.296: 98.4313% ( 2) 00:15:59.506 8.296 - 8.344: 98.4388% ( 1) 00:15:59.506 8.391 - 8.439: 98.4462% ( 1) 00:15:59.506 8.533 - 8.581: 98.4536% ( 1) 00:15:59.506 8.581 - 8.628: 98.4759% ( 3) 00:15:59.506 8.628 - 8.676: 98.4908% ( 2) 00:15:59.506 8.676 - 8.723: 98.5057% ( 2) 00:15:59.506 8.723 - 8.770: 98.5131% ( 1) 00:15:59.506 8.865 - 8.913: 98.5354% ( 3) 00:15:59.506 8.913 - 8.960: 98.5429% ( 1) 00:15:59.506 8.960 - 9.007: 98.5503% ( 1) 00:15:59.506 9.007 - 9.055: 98.5652% ( 2) 00:15:59.506 9.197 - 9.244: 98.5800% ( 2) 00:15:59.506 9.292 - 9.339: 98.5875% ( 1) 00:15:59.506 9.434 - 9.481: 98.6172% ( 4) 00:15:59.506 9.529 - 9.576: 98.6469% ( 4) 00:15:59.506 9.624 - 9.671: 98.6618% ( 2) 00:15:59.506 9.766 - 9.813: 98.6692% ( 1) 00:15:59.506 9.908 - 9.956: 98.6767% ( 1) 00:15:59.506 10.098 - 10.145: 98.6990% ( 3) 00:15:59.506 10.335 - 10.382: 98.7139% ( 2) 00:15:59.506 10.430 - 10.477: 98.7213% ( 1) 00:15:59.506 10.524 - 10.572: 98.7287% ( 1) 00:15:59.506 10.714 - 10.761: 98.7362% ( 1) 00:15:59.506 10.809 - 10.856: 98.7585% ( 3) 00:15:59.506 11.283 - 11.330: 98.7659% ( 1) 00:15:59.506 11.378 - 11.425: 98.7733% ( 1) 00:15:59.506 11.473 - 11.520: 98.7808% ( 1) 00:15:59.506 11.662 - 11.710: 98.7956% ( 2) 00:15:59.506 11.710 - 11.757: 98.8031% ( 1) 00:15:59.506 11.804 - 11.852: 98.8179% ( 2) 00:15:59.506 
11.994 - 12.041: 98.8254% ( 1) 00:15:59.506 12.041 - 12.089: 98.8328% ( 1) 00:15:59.506 12.136 - 12.231: 98.8402% ( 1) 00:15:59.506 12.231 - 12.326: 98.8477% ( 1) 00:15:59.506 12.705 - 12.800: 98.8551% ( 1) 00:15:59.506 12.800 - 12.895: 98.8700% ( 2) 00:15:59.506 12.895 - 12.990: 98.8774% ( 1) 00:15:59.506 13.179 - 13.274: 98.8848% ( 1) 00:15:59.506 13.369 - 13.464: 98.8997% ( 2) 00:15:59.506 13.559 - 13.653: 98.9071% ( 1) 00:15:59.506 13.748 - 13.843: 98.9146% ( 1) 00:15:59.506 13.843 - 13.938: 98.9220% ( 1) 00:15:59.506 14.127 - 14.222: 98.9294% ( 1) 00:15:59.506 14.412 - 14.507: 98.9369% ( 1) 00:15:59.506 14.507 - 14.601: 98.9443% ( 1) 00:15:59.506 15.170 - 15.265: 98.9518% ( 1) 00:15:59.506 15.644 - 15.739: 98.9592% ( 1) 00:15:59.506 17.161 - 17.256: 98.9741% ( 2) 00:15:59.506 17.256 - 17.351: 98.9889% ( 2) 00:15:59.506 17.351 - 17.446: 99.0038% ( 2) 00:15:59.506 17.446 - 17.541: 99.0335% ( 4) 00:15:59.506 17.541 - 17.636: 99.0633% ( 4) 00:15:59.506 17.636 - 17.730: 99.1153% ( 7) 00:15:59.506 17.730 - 17.825: 99.1376% ( 3) 00:15:59.506 17.825 - 17.920: 99.1822% ( 6) 00:15:59.506 17.920 - 18.015: 99.2343% ( 7) 00:15:59.506 18.015 - 18.110: 99.2566% ( 3) 00:15:59.506 18.110 - 18.204: 99.3235% ( 9) 00:15:59.506 18.204 - 18.299: 99.3904% ( 9) 00:15:59.506 18.299 - 18.394: 99.4499% ( 8) 00:15:59.506 18.394 - 18.489: 99.4945% ( 6) 00:15:59.506 18.489 - 18.584: 99.5391% ( 6) 00:15:59.506 18.584 - 18.679: 99.5985% ( 8) 00:15:59.506 18.679 - 18.773: 99.6803% ( 11) 00:15:59.506 18.773 - 18.868: 99.7026% ( 3) 00:15:59.506 18.868 - 18.963: 99.7175% ( 2) 00:15:59.506 18.963 - 19.058: 99.7324% ( 2) 00:15:59.506 19.058 - 19.153: 99.7398% ( 1) 00:15:59.506 19.153 - 19.247: 99.7472% ( 1) 00:15:59.506 19.342 - 19.437: 99.7695% ( 3) 00:15:59.506 19.627 - 19.721: 99.7770% ( 1) 00:15:59.506 21.333 - 21.428: 99.7844% ( 1) 00:15:59.506 22.092 - 22.187: 99.7918% ( 1) 00:15:59.506 22.661 - 22.756: 99.7993% ( 1) 00:15:59.506 22.756 - 22.850: 99.8067% ( 1) 00:15:59.506 23.419 - 23.514: 99.8141% ( 1) 00:15:59.506 23.514 - 23.609: 99.8216% ( 1) 00:15:59.506 23.609 - 23.704: 99.8290% ( 1) 00:15:59.506 23.893 - 23.988: 99.8364% ( 1) 00:15:59.506 24.462 - 24.652: 99.8439% ( 1) 00:15:59.506 28.065 - 28.255: 99.8513% ( 1) 00:15:59.506 29.203 - 29.393: 99.8587% ( 1) 00:15:59.506 40.581 - 40.770: 99.8662% ( 1) 00:15:59.506 41.150 - 41.339: 99.8736% ( 1) 00:15:59.506 42.098 - 42.287: 99.8810% ( 1) 00:15:59.506 109.985 - 110.744: 99.8885% ( 1) 00:15:59.506 3956.433 - 3980.705: 99.8959% ( 1) 00:15:59.506 3980.705 - 4004.978: 99.9554% ( 8) 00:15:59.506 4004.978 - 4029.250: 100.0000% ( 6) 00:15:59.506 00:15:59.506 Complete histogram 00:15:59.506 ================== 00:15:59.506 Range in us Cumulative Count 00:15:59.506 2.062 - 2.074: 0.0297% ( 4) 00:15:59.506 2.074 - 2.086: 8.1778% ( 1096) 00:15:59.506 2.086 - 2.098: 18.2589% ( 1356) 00:15:59.506 2.098 - 2.110: 20.4743% ( 298) 00:15:59.506 2.110 - 2.121: 46.2717% ( 3470) 00:15:59.506 2.121 - 2.133: 59.4008% ( 1766) 00:15:59.506 2.133 - 2.145: 62.9024% ( 471) 00:15:59.506 2.145 - 2.157: 66.7757% ( 521) 00:15:59.506 2.157 - 2.169: 69.0432% ( 305) 00:15:59.506 2.169 - 2.181: 70.5301% ( 200) 00:15:59.506 2.181 - 2.193: 78.3585% ( 1053) 00:15:59.506 2.193 - 2.204: 83.2429% ( 657) 00:15:59.506 2.204 - 2.216: 84.3283% ( 146) 00:15:59.507 2.216 - 2.228: 85.6442% ( 177) 00:15:59.507 2.228 - 2.240: 87.0939% ( 195) 00:15:59.507 2.240 - 2.252: 88.1124% ( 137) 00:15:59.507 2.252 - 2.264: 90.6029% ( 335) 00:15:59.507 2.264 - 2.276: 92.9671% ( 318) 00:15:59.507 2.276 - 2.287: 93.8666% ( 121) 
00:15:59.507 2.287 - 2.299: 94.2458% ( 51) 00:15:59.507 2.299 - 2.311: 94.5357% ( 39) 00:15:59.507 2.311 - 2.323: 94.8034% ( 36) 00:15:59.507 2.323 - 2.335: 94.9074% ( 14) 00:15:59.507 2.335 - 2.347: 95.1156% ( 28) 00:15:59.507 2.347 - 2.359: 95.3163% ( 27) 00:15:59.507 2.359 - 2.370: 95.4353% ( 16) 00:15:59.507 2.370 - 2.382: 95.5765% ( 19) 00:15:59.507 2.382 - 2.394: 95.8442% ( 36) 00:15:59.507 2.394 - 2.406: 96.2308% ( 52) 00:15:59.507 2.406 - 2.418: 96.5802% ( 47) 00:15:59.507 2.418 - 2.430: 96.9222% ( 46) 00:15:59.507 2.430 - 2.441: 97.2716% ( 47) 00:15:59.507 2.441 - 2.453: 97.5169% ( 33) 00:15:59.507 2.453 - 2.465: 97.6284% ( 15) 00:15:59.507 2.465 - 2.477: 97.7102% ( 11) 00:15:59.507 2.477 - 2.489: 97.7622% ( 7) 00:15:59.507 2.489 - 2.501: 97.8366% ( 10) 00:15:59.507 2.501 - 2.513: 97.9332% ( 13) 00:15:59.507 2.513 - 2.524: 97.9704% ( 5) 00:15:59.507 2.524 - 2.536: 98.0076% ( 5) 00:15:59.507 2.536 - 2.548: 98.0522% ( 6) 00:15:59.507 2.548 - 2.560: 98.0596% ( 1) 00:15:59.507 2.560 - 2.572: 98.0745% ( 2) 00:15:59.507 2.572 - 2.584: 98.0968% ( 3) 00:15:59.507 2.584 - 2.596: 98.1042% ( 1) 00:15:59.507 2.596 - 2.607: 98.1265% ( 3) 00:15:59.507 2.607 - 2.619: 98.1340% ( 1) 00:15:59.507 2.750 - 2.761: 98.1414% ( 1) 00:15:59.507 2.761 - 2.773: 98.1637% ( 3) 00:15:59.507 2.773 - 2.785: 98.1711% ( 1) 00:15:59.507 2.785 - 2.797: 98.1786% ( 1) 00:15:59.507 2.833 - 2.844: 98.1860% ( 1) 00:15:59.507 2.868 - 2.880: 98.2009% ( 2) 00:15:59.507 2.880 - 2.892: 98.2083% ( 1) 00:15:59.507 2.904 - 2.916: 98.2157% ( 1) 00:15:59.507 2.916 - 2.927: 98.2232% ( 1) 00:15:59.507 2.927 - 2.939: 98.2306% ( 1) 00:15:59.507 2.951 - 2.963: 98.2380% ( 1) 00:15:59.507 2.987 - 2.999: 98.2455% ( 1) 00:15:59.507 3.058 - 3.081: 98.2529% ( 1) 00:15:59.507 3.081 - 3.105: 98.2678% ( 2) 00:15:59.507 3.105 - 3.129: 98.2827% ( 2) 00:15:59.507 3.129 - 3.153: 98.2975% ( 2) 00:15:59.507 3.153 - 3.176: 98.3198% ( 3) 00:15:59.507 3.176 - 3.200: 98.3347% ( 2) 00:15:59.507 3.200 - 3.224: 98.3496% ( 2) 00:15:59.507 3.224 - 3.247: 98.3570% ( 1) 00:15:59.507 3.271 - 3.295: 98.3644% ( 1) 00:15:59.507 3.342 - 3.366: 98.3793% ( 2) 00:15:59.507 3.390 - 3.413: 98.3867% ( 1) 00:15:59.507 3.461 - 3.484: 98.3942% ( 1) 00:15:59.507 3.484 - 3.508: 98.4165% ( 3) 00:15:59.507 3.508 - 3.532: 98.4313% ( 2) 00:15:59.507 3.532 - 3.556: 98.4388% ( 1) 00:15:59.507 3.556 - 3.579: 98.4611% ( 3) 00:15:59.507 3.579 - 3.603: 98.4759% ( 2) 00:15:59.507 3.603 - 3.627: 98.4834% ( 1) 00:15:59.507 3.627 - 3.650: 98.4908% ( 1) 00:15:59.507 3.650 - 3.674: 98.5057% ( 2) 00:15:59.507 3.674 - 3.698: 98.5429% ( 5) 00:15:59.507 3.698 - 3.721: 98.5577% ( 2) 00:15:59.507 3.721 - 3.745: 98.5652% ( 1) 00:15:59.507 3.745 - 3.769: 98.5726% ( 1) 00:15:59.507 3.769 - 3.793: 98.5800% ( 1) 00:15:59.507 3.793 - 3.816: 98.5949% ( 2) 00:15:59.507 3.840 - 3.864: 98.6098% ( 2) 00:15:59.507 3.887 - 3.911: 98.6172% ( 1) 00:15:59.507 3.935 - 3.959: 98.6246% ( 1) 00:15:59.507 3.959 - 3.982: 98.6321% ( 1) 00:15:59.507 3.982 - 4.006: 98.6395% ( 1) 00:15:59.507 4.006 - 4.030: 98.6469% ( 1) 00:15:59.507 4.030 - 4.053: 98.6544% ( 1) 00:15:59.507 4.124 - 4.148: 98.6618% ( 1) 00:15:59.507 5.073 - 5.096: 98.6692% ( 1) 00:15:59.507 6.400 - 6.447: 98.6767% ( 1) 00:15:59.507 6.637 - 6.684: 98.6915% ( 2) 00:15:59.507 6.684 - 6.732: 98.6990% ( 1) 00:15:59.507 6.874 - 6.921: 98.7064% ( 1) 00:15:59.507 6.921 - 6.969: 98.7139% ( 1) 00:15:59.507 7.016 - 7.064: 98.7213% ( 1) 00:15:59.507 7.490 - 7.538: 98.7287% ( 1) 00:15:59.507 7.727 - 7.775: 98.7436% ( 2) 00:15:59.507 7.775 - 7.822: 98.7510% ( 1) 
00:15:59.507 7.870 - 7.917: 98.7585% ( 1) 00:15:59.507 7.917 - 7.964: 98.7659% ( 1) 00:15:59.507 8.249 - 8.296: 98.7733% ( 1) 00:15:59.507 8.533 - 8.581: 98.7808% ( 1) 00:15:59.507 8.770 - 8.818: 98.7956% ( 2) 00:15:59.507 9.007 - 9.055: 98.8031% ( 1) 00:15:59.507 9.576 - 9.624: 98.8105% ( 1) 00:15:59.507 9.624 - 9.671: 98.8179% ( 1) 00:15:59.507 9.766 - 9.813: 98.8328% ( 2) 00:15:59.507 10.287 - 10.335: 98.8402% ( 1) 00:15:59.507 11.188 - 11.236: 98.8477% ( 1) 00:15:59.507 11.283 - 11.330: 98.8551% ( 1) 00:15:59.507 11.994 - 12.041: 98.8625% ( 1) 00:15:59.507 13.179 - 13.274: 98.8700% ( 1) 00:15:59.507 13.559 - 13.653: 98.8774% ( 1) 00:15:59.507 15.644 - 15.739: 98.8848% ( 1) 00:15:59.507 15.739 - 15.834: 98.9071% ( 3) 00:15:59.507 15.834 - 15.929: 98.9220% ( 2) 00:15:59.507 15.929 - 16.024: 98.9518% ( 4) 00:15:59.507 16.024 - 16.119: 98.9815% ( 4) 00:15:59.507 16.119 - 16.213: 99.0112% ( 4) 00:15:59.507 16.213 - 16.308: 99.0261% ( 2) 00:15:59.507 16.308 - 16.403: 99.0633% ( 5) 00:15:59.507 16.403 - 16.498: 99.0781% ( 2) 00:15:59.507 16.498 - 16.593: 99.1079% ( 4) 00:15:59.507 16.593 - 16.687: 99.1897% ( 11) 00:15:59.507 16.687 - 16.782: 99.2120% ( 3) 00:15:59.507 16.782 - 16.877: 99.2491% ( 5) 00:15:59.507 16.877 - 16.972: 99.2714% ( 3) 00:15:59.507 16.972 - 17.067: 99.2937% ( 3) 00:15:59.507 17.161 - 17.256: 99.3012% ( 1) 00:15:59.507 17.256 - 17.351: 99.3086% ( 1) 00:15:59.507 17.351 - 17.446: 99.3235%[2024-04-17 06:42:04.108065] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:59.765 ( 2) 00:15:59.765 17.446 - 17.541: 99.3383% ( 2) 00:15:59.765 17.825 - 17.920: 99.3458% ( 1) 00:15:59.765 18.015 - 18.110: 99.3532% ( 1) 00:15:59.765 18.299 - 18.394: 99.3606% ( 1) 00:15:59.765 18.394 - 18.489: 99.3681% ( 1) 00:15:59.765 18.584 - 18.679: 99.3755% ( 1) 00:15:59.765 19.342 - 19.437: 99.3829% ( 1) 00:15:59.765 23.324 - 23.419: 99.3904% ( 1) 00:15:59.765 24.273 - 24.462: 99.3978% ( 1) 00:15:59.765 30.530 - 30.720: 99.4052% ( 1) 00:15:59.765 40.960 - 41.150: 99.4127% ( 1) 00:15:59.765 43.994 - 44.184: 99.4201% ( 1) 00:15:59.765 3810.797 - 3835.070: 99.4276% ( 1) 00:15:59.765 3980.705 - 4004.978: 99.7844% ( 48) 00:15:59.765 4004.978 - 4029.250: 100.0000% ( 29) 00:15:59.765 00:15:59.765 06:42:04 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:59.765 06:42:04 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:59.765 06:42:04 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:59.765 06:42:04 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:59.765 06:42:04 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:00.024 [ 00:16:00.024 { 00:16:00.024 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:00.024 "subtype": "Discovery", 00:16:00.024 "listen_addresses": [], 00:16:00.024 "allow_any_host": true, 00:16:00.024 "hosts": [] 00:16:00.024 }, 00:16:00.024 { 00:16:00.024 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:00.024 "subtype": "NVMe", 00:16:00.024 "listen_addresses": [ 00:16:00.024 { 00:16:00.024 "transport": "VFIOUSER", 00:16:00.024 "trtype": "VFIOUSER", 00:16:00.024 "adrfam": "IPv4", 00:16:00.024 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:00.024 "trsvcid": "0" 00:16:00.024 } 00:16:00.024 ], 00:16:00.024 "allow_any_host": true, 00:16:00.024 "hosts": [], 00:16:00.024 "serial_number": 
"SPDK1", 00:16:00.024 "model_number": "SPDK bdev Controller", 00:16:00.024 "max_namespaces": 32, 00:16:00.024 "min_cntlid": 1, 00:16:00.024 "max_cntlid": 65519, 00:16:00.024 "namespaces": [ 00:16:00.024 { 00:16:00.024 "nsid": 1, 00:16:00.024 "bdev_name": "Malloc1", 00:16:00.024 "name": "Malloc1", 00:16:00.024 "nguid": "0F2E9C0F05D340D2BC4D208A803049EC", 00:16:00.024 "uuid": "0f2e9c0f-05d3-40d2-bc4d-208a803049ec" 00:16:00.024 }, 00:16:00.024 { 00:16:00.024 "nsid": 2, 00:16:00.024 "bdev_name": "Malloc3", 00:16:00.024 "name": "Malloc3", 00:16:00.024 "nguid": "35E2AE25B2284812964B506F40AA5446", 00:16:00.024 "uuid": "35e2ae25-b228-4812-964b-506f40aa5446" 00:16:00.024 } 00:16:00.024 ] 00:16:00.024 }, 00:16:00.024 { 00:16:00.024 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:00.024 "subtype": "NVMe", 00:16:00.024 "listen_addresses": [ 00:16:00.024 { 00:16:00.024 "transport": "VFIOUSER", 00:16:00.024 "trtype": "VFIOUSER", 00:16:00.024 "adrfam": "IPv4", 00:16:00.024 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:00.024 "trsvcid": "0" 00:16:00.024 } 00:16:00.024 ], 00:16:00.024 "allow_any_host": true, 00:16:00.024 "hosts": [], 00:16:00.024 "serial_number": "SPDK2", 00:16:00.024 "model_number": "SPDK bdev Controller", 00:16:00.024 "max_namespaces": 32, 00:16:00.024 "min_cntlid": 1, 00:16:00.024 "max_cntlid": 65519, 00:16:00.024 "namespaces": [ 00:16:00.024 { 00:16:00.024 "nsid": 1, 00:16:00.024 "bdev_name": "Malloc2", 00:16:00.024 "name": "Malloc2", 00:16:00.024 "nguid": "45FB7CF9F2EF425D9BA6489D2EDC7AAC", 00:16:00.024 "uuid": "45fb7cf9-f2ef-425d-9ba6-489d2edc7aac" 00:16:00.024 } 00:16:00.024 ] 00:16:00.024 } 00:16:00.024 ] 00:16:00.024 06:42:04 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:00.024 06:42:04 -- target/nvmf_vfio_user.sh@34 -- # aerpid=4158377 00:16:00.024 06:42:04 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:00.024 06:42:04 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:00.024 06:42:04 -- common/autotest_common.sh@1251 -- # local i=0 00:16:00.024 06:42:04 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:00.024 06:42:04 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:00.024 06:42:04 -- common/autotest_common.sh@1262 -- # return 0 00:16:00.024 06:42:04 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:00.024 06:42:04 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:00.024 EAL: No free 2048 kB hugepages reported on node 1 00:16:00.024 [2024-04-17 06:42:04.575658] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:00.282 Malloc4 00:16:00.282 06:42:04 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:00.540 [2024-04-17 06:42:04.930243] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:00.540 06:42:04 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:00.540 Asynchronous Event Request test 00:16:00.540 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:00.540 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:00.540 Registering asynchronous event callbacks... 00:16:00.540 Starting namespace attribute notice tests for all controllers... 00:16:00.540 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:00.540 aer_cb - Changed Namespace 00:16:00.540 Cleaning up... 00:16:00.797 [ 00:16:00.797 { 00:16:00.797 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:00.797 "subtype": "Discovery", 00:16:00.797 "listen_addresses": [], 00:16:00.797 "allow_any_host": true, 00:16:00.797 "hosts": [] 00:16:00.797 }, 00:16:00.797 { 00:16:00.797 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:00.797 "subtype": "NVMe", 00:16:00.797 "listen_addresses": [ 00:16:00.797 { 00:16:00.797 "transport": "VFIOUSER", 00:16:00.797 "trtype": "VFIOUSER", 00:16:00.797 "adrfam": "IPv4", 00:16:00.797 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:00.797 "trsvcid": "0" 00:16:00.797 } 00:16:00.797 ], 00:16:00.797 "allow_any_host": true, 00:16:00.797 "hosts": [], 00:16:00.797 "serial_number": "SPDK1", 00:16:00.797 "model_number": "SPDK bdev Controller", 00:16:00.797 "max_namespaces": 32, 00:16:00.797 "min_cntlid": 1, 00:16:00.797 "max_cntlid": 65519, 00:16:00.797 "namespaces": [ 00:16:00.797 { 00:16:00.797 "nsid": 1, 00:16:00.797 "bdev_name": "Malloc1", 00:16:00.797 "name": "Malloc1", 00:16:00.797 "nguid": "0F2E9C0F05D340D2BC4D208A803049EC", 00:16:00.797 "uuid": "0f2e9c0f-05d3-40d2-bc4d-208a803049ec" 00:16:00.797 }, 00:16:00.797 { 00:16:00.797 "nsid": 2, 00:16:00.797 "bdev_name": "Malloc3", 00:16:00.797 "name": "Malloc3", 00:16:00.797 "nguid": "35E2AE25B2284812964B506F40AA5446", 00:16:00.797 "uuid": "35e2ae25-b228-4812-964b-506f40aa5446" 00:16:00.797 } 00:16:00.798 ] 00:16:00.798 }, 00:16:00.798 { 00:16:00.798 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:00.798 "subtype": "NVMe", 00:16:00.798 "listen_addresses": [ 00:16:00.798 { 00:16:00.798 "transport": "VFIOUSER", 00:16:00.798 "trtype": "VFIOUSER", 00:16:00.798 "adrfam": "IPv4", 00:16:00.798 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:00.798 "trsvcid": "0" 00:16:00.798 } 00:16:00.798 ], 00:16:00.798 "allow_any_host": true, 00:16:00.798 "hosts": [], 00:16:00.798 "serial_number": "SPDK2", 00:16:00.798 "model_number": "SPDK bdev Controller", 00:16:00.798 "max_namespaces": 32, 00:16:00.798 "min_cntlid": 1, 
00:16:00.798 "max_cntlid": 65519, 00:16:00.798 "namespaces": [ 00:16:00.798 { 00:16:00.798 "nsid": 1, 00:16:00.798 "bdev_name": "Malloc2", 00:16:00.798 "name": "Malloc2", 00:16:00.798 "nguid": "45FB7CF9F2EF425D9BA6489D2EDC7AAC", 00:16:00.798 "uuid": "45fb7cf9-f2ef-425d-9ba6-489d2edc7aac" 00:16:00.798 }, 00:16:00.798 { 00:16:00.798 "nsid": 2, 00:16:00.798 "bdev_name": "Malloc4", 00:16:00.798 "name": "Malloc4", 00:16:00.798 "nguid": "212587CE1E534D21B0A2D896CE4A61A3", 00:16:00.798 "uuid": "212587ce-1e53-4d21-b0a2-d896ce4a61a3" 00:16:00.798 } 00:16:00.798 ] 00:16:00.798 } 00:16:00.798 ] 00:16:00.798 06:42:05 -- target/nvmf_vfio_user.sh@44 -- # wait 4158377 00:16:00.798 06:42:05 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:00.798 06:42:05 -- target/nvmf_vfio_user.sh@95 -- # killprocess 4152667 00:16:00.798 06:42:05 -- common/autotest_common.sh@936 -- # '[' -z 4152667 ']' 00:16:00.798 06:42:05 -- common/autotest_common.sh@940 -- # kill -0 4152667 00:16:00.798 06:42:05 -- common/autotest_common.sh@941 -- # uname 00:16:00.798 06:42:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:00.798 06:42:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4152667 00:16:00.798 06:42:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:00.798 06:42:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:00.798 06:42:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4152667' 00:16:00.798 killing process with pid 4152667 00:16:00.798 06:42:05 -- common/autotest_common.sh@955 -- # kill 4152667 00:16:00.798 [2024-04-17 06:42:05.215371] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:16:00.798 06:42:05 -- common/autotest_common.sh@960 -- # wait 4152667 00:16:01.055 06:42:05 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:01.055 06:42:05 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:01.055 06:42:05 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:01.055 06:42:05 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:01.056 06:42:05 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:01.056 06:42:05 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=4158514 00:16:01.056 06:42:05 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 4158514' 00:16:01.056 Process pid: 4158514 00:16:01.056 06:42:05 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:01.056 06:42:05 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:01.056 06:42:05 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 4158514 00:16:01.056 06:42:05 -- common/autotest_common.sh@817 -- # '[' -z 4158514 ']' 00:16:01.056 06:42:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.056 06:42:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:01.056 06:42:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:01.056 06:42:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:01.056 06:42:05 -- common/autotest_common.sh@10 -- # set +x 00:16:01.056 [2024-04-17 06:42:05.600782] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:01.056 [2024-04-17 06:42:05.602040] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:16:01.056 [2024-04-17 06:42:05.602101] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.056 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.314 [2024-04-17 06:42:05.669308] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:01.314 [2024-04-17 06:42:05.760598] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:01.314 [2024-04-17 06:42:05.760648] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:01.314 [2024-04-17 06:42:05.760673] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:01.314 [2024-04-17 06:42:05.760685] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:01.314 [2024-04-17 06:42:05.760695] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:01.314 [2024-04-17 06:42:05.760789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.314 [2024-04-17 06:42:05.760855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:01.314 [2024-04-17 06:42:05.760948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:01.314 [2024-04-17 06:42:05.760951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.314 [2024-04-17 06:42:05.864000] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:16:01.314 [2024-04-17 06:42:05.864229] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:16:01.314 [2024-04-17 06:42:05.864486] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:16:01.314 [2024-04-17 06:42:05.865216] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:01.314 [2024-04-17 06:42:05.865324] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 
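The target has just been relaunched with --interrupt-mode and the transport arguments '-M -I'; the RPC calls that follow rebuild the two vfio-user devices. A condensed sketch of that per-device sequence, using the same rpc.py calls the script issues below (64 MiB malloc bdev with 512-byte blocks, listener socket under /var/run/vfio-user):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $RPC nvmf_create_transport -t VFIOUSER -M -I
  for i in 1 2; do
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      $RPC bdev_malloc_create 64 512 -b Malloc$i
      $RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      $RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
          -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done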
00:16:01.314 06:42:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:01.314 06:42:05 -- common/autotest_common.sh@850 -- # return 0 00:16:01.314 06:42:05 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:02.687 06:42:06 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:02.687 06:42:07 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:02.687 06:42:07 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:02.687 06:42:07 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:02.687 06:42:07 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:02.687 06:42:07 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:02.946 Malloc1 00:16:02.946 06:42:07 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:03.204 06:42:07 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:03.462 06:42:07 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:03.720 06:42:08 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:03.720 06:42:08 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:03.720 06:42:08 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:03.978 Malloc2 00:16:03.978 06:42:08 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:04.235 06:42:08 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:04.493 06:42:08 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:04.750 06:42:09 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:04.750 06:42:09 -- target/nvmf_vfio_user.sh@95 -- # killprocess 4158514 00:16:04.750 06:42:09 -- common/autotest_common.sh@936 -- # '[' -z 4158514 ']' 00:16:04.750 06:42:09 -- common/autotest_common.sh@940 -- # kill -0 4158514 00:16:04.750 06:42:09 -- common/autotest_common.sh@941 -- # uname 00:16:04.750 06:42:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:04.750 06:42:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4158514 00:16:04.750 06:42:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:04.750 06:42:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:04.750 06:42:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4158514' 00:16:04.750 killing process with pid 4158514 00:16:04.750 06:42:09 -- common/autotest_common.sh@955 -- # kill 4158514 00:16:04.750 06:42:09 -- common/autotest_common.sh@960 -- # wait 4158514 00:16:05.016 06:42:09 -- target/nvmf_vfio_user.sh@97 -- # rm -rf 
/var/run/vfio-user 00:16:05.016 06:42:09 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:05.016 00:16:05.016 real 0m52.483s 00:16:05.016 user 3m27.286s 00:16:05.016 sys 0m4.445s 00:16:05.016 06:42:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:05.016 06:42:09 -- common/autotest_common.sh@10 -- # set +x 00:16:05.016 ************************************ 00:16:05.016 END TEST nvmf_vfio_user 00:16:05.016 ************************************ 00:16:05.016 06:42:09 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:05.016 06:42:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:05.016 06:42:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:05.016 06:42:09 -- common/autotest_common.sh@10 -- # set +x 00:16:05.335 ************************************ 00:16:05.335 START TEST nvmf_vfio_user_nvme_compliance 00:16:05.335 ************************************ 00:16:05.335 06:42:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:05.335 * Looking for test storage... 00:16:05.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:05.335 06:42:09 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:05.335 06:42:09 -- nvmf/common.sh@7 -- # uname -s 00:16:05.335 06:42:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:05.335 06:42:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:05.335 06:42:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:05.335 06:42:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:05.335 06:42:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:05.335 06:42:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:05.335 06:42:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:05.335 06:42:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:05.335 06:42:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:05.335 06:42:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:05.335 06:42:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:05.335 06:42:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:05.335 06:42:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:05.335 06:42:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:05.335 06:42:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:05.335 06:42:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:05.335 06:42:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:05.335 06:42:09 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:05.335 06:42:09 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:05.335 06:42:09 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:05.335 06:42:09 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.336 06:42:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.336 06:42:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.336 06:42:09 -- paths/export.sh@5 -- # export PATH 00:16:05.336 06:42:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.336 06:42:09 -- nvmf/common.sh@47 -- # : 0 00:16:05.336 06:42:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:05.336 06:42:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:05.336 06:42:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:05.336 06:42:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:05.336 06:42:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:05.336 06:42:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:05.336 06:42:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:05.336 06:42:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:05.336 06:42:09 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:05.336 06:42:09 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:05.336 06:42:09 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:05.336 06:42:09 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:05.336 06:42:09 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:05.336 06:42:09 -- compliance/compliance.sh@20 -- # nvmfpid=4159623 00:16:05.336 06:42:09 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
-m 0x7 00:16:05.336 06:42:09 -- compliance/compliance.sh@21 -- # echo 'Process pid: 4159623' 00:16:05.336 Process pid: 4159623 00:16:05.336 06:42:09 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:05.336 06:42:09 -- compliance/compliance.sh@24 -- # waitforlisten 4159623 00:16:05.336 06:42:09 -- common/autotest_common.sh@817 -- # '[' -z 4159623 ']' 00:16:05.336 06:42:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.336 06:42:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:05.336 06:42:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.336 06:42:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:05.336 06:42:09 -- common/autotest_common.sh@10 -- # set +x 00:16:05.336 [2024-04-17 06:42:09.763586] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:16:05.336 [2024-04-17 06:42:09.763681] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:05.336 EAL: No free 2048 kB hugepages reported on node 1 00:16:05.336 [2024-04-17 06:42:09.822130] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:05.336 [2024-04-17 06:42:09.906565] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:05.336 [2024-04-17 06:42:09.906618] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:05.336 [2024-04-17 06:42:09.906642] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:05.336 [2024-04-17 06:42:09.906653] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:05.336 [2024-04-17 06:42:09.906663] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
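Editor's note: the rpc_cmd bring-up that follows in the next chunk (VFIOUSER transport, malloc bdev, subsystem, vfio-user listener) boils down to a short manual sequence. A minimal sketch under the assumption that the stock scripts/rpc.py client and its default /var/tmp/spdk.sock socket are used; paths, sizes and the NQN are copied from the log itself, nothing else is implied about compliance.sh internals:

# start the target with the same core mask / trace flags compliance.sh uses
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
# create the VFIOUSER transport and a 64 MiB, 512 B-block malloc bdev
./scripts/rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
# expose the bdev through a subsystem listening on the vfio-user socket directory
./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0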
00:16:05.336 [2024-04-17 06:42:09.906737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:05.336 [2024-04-17 06:42:09.906802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:05.336 [2024-04-17 06:42:09.906805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.594 06:42:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:05.594 06:42:10 -- common/autotest_common.sh@850 -- # return 0 00:16:05.594 06:42:10 -- compliance/compliance.sh@26 -- # sleep 1 00:16:06.528 06:42:11 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:06.528 06:42:11 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:06.528 06:42:11 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:06.528 06:42:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.528 06:42:11 -- common/autotest_common.sh@10 -- # set +x 00:16:06.528 06:42:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.528 06:42:11 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:06.528 06:42:11 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:06.528 06:42:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.528 06:42:11 -- common/autotest_common.sh@10 -- # set +x 00:16:06.528 malloc0 00:16:06.528 06:42:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.528 06:42:11 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:06.528 06:42:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.528 06:42:11 -- common/autotest_common.sh@10 -- # set +x 00:16:06.528 06:42:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.528 06:42:11 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:06.528 06:42:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.528 06:42:11 -- common/autotest_common.sh@10 -- # set +x 00:16:06.528 06:42:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.528 06:42:11 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:06.528 06:42:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.528 06:42:11 -- common/autotest_common.sh@10 -- # set +x 00:16:06.528 06:42:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.528 06:42:11 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:06.788 EAL: No free 2048 kB hugepages reported on node 1 00:16:06.788 00:16:06.788 00:16:06.788 CUnit - A unit testing framework for C - Version 2.1-3 00:16:06.788 http://cunit.sourceforge.net/ 00:16:06.788 00:16:06.788 00:16:06.788 Suite: nvme_compliance 00:16:06.788 Test: admin_identify_ctrlr_verify_dptr ...[2024-04-17 06:42:11.257730] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:06.788 [2024-04-17 06:42:11.259125] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:06.788 [2024-04-17 06:42:11.259150] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:06.788 [2024-04-17 06:42:11.259185] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:06.788 
[2024-04-17 06:42:11.260753] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:06.788 passed 00:16:06.788 Test: admin_identify_ctrlr_verify_fused ...[2024-04-17 06:42:11.346357] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:06.788 [2024-04-17 06:42:11.349378] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:06.788 passed 00:16:07.046 Test: admin_identify_ns ...[2024-04-17 06:42:11.434805] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.046 [2024-04-17 06:42:11.495209] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:07.046 [2024-04-17 06:42:11.503193] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:07.046 [2024-04-17 06:42:11.524332] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.046 passed 00:16:07.046 Test: admin_get_features_mandatory_features ...[2024-04-17 06:42:11.606899] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.046 [2024-04-17 06:42:11.609918] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.046 passed 00:16:07.437 Test: admin_get_features_optional_features ...[2024-04-17 06:42:11.694534] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.437 [2024-04-17 06:42:11.697552] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.437 passed 00:16:07.437 Test: admin_set_features_number_of_queues ...[2024-04-17 06:42:11.781690] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.437 [2024-04-17 06:42:11.886277] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.437 passed 00:16:07.437 Test: admin_get_log_page_mandatory_logs ...[2024-04-17 06:42:11.969929] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.437 [2024-04-17 06:42:11.972955] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.437 passed 00:16:07.695 Test: admin_get_log_page_with_lpo ...[2024-04-17 06:42:12.054190] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.695 [2024-04-17 06:42:12.124193] ctrlr.c:2604:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:07.695 [2024-04-17 06:42:12.137274] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.695 passed 00:16:07.695 Test: fabric_property_get ...[2024-04-17 06:42:12.219485] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.695 [2024-04-17 06:42:12.220746] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:07.695 [2024-04-17 06:42:12.222505] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.695 passed 00:16:07.953 Test: admin_delete_io_sq_use_admin_qid ...[2024-04-17 06:42:12.309039] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.953 [2024-04-17 06:42:12.310360] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:07.953 [2024-04-17 06:42:12.312066] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 
00:16:07.953 passed 00:16:07.953 Test: admin_delete_io_sq_delete_sq_twice ...[2024-04-17 06:42:12.393182] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.953 [2024-04-17 06:42:12.481202] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:07.953 [2024-04-17 06:42:12.500189] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:07.953 [2024-04-17 06:42:12.505289] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.953 passed 00:16:08.211 Test: admin_delete_io_cq_use_admin_qid ...[2024-04-17 06:42:12.589709] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:08.211 [2024-04-17 06:42:12.590989] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:08.211 [2024-04-17 06:42:12.592730] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:08.211 passed 00:16:08.211 Test: admin_delete_io_cq_delete_cq_first ...[2024-04-17 06:42:12.675879] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:08.211 [2024-04-17 06:42:12.753188] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:08.211 [2024-04-17 06:42:12.780188] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:08.211 [2024-04-17 06:42:12.782312] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:08.211 passed 00:16:08.469 Test: admin_create_io_cq_verify_iv_pc ...[2024-04-17 06:42:12.867648] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:08.469 [2024-04-17 06:42:12.868946] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:08.469 [2024-04-17 06:42:12.868986] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:08.469 [2024-04-17 06:42:12.870669] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:08.469 passed 00:16:08.469 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-04-17 06:42:12.951806] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:08.469 [2024-04-17 06:42:13.047202] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:08.469 [2024-04-17 06:42:13.055185] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:08.469 [2024-04-17 06:42:13.063188] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:08.469 [2024-04-17 06:42:13.071186] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:08.727 [2024-04-17 06:42:13.100295] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:08.727 passed 00:16:08.727 Test: admin_create_io_sq_verify_pc ...[2024-04-17 06:42:13.179881] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:08.727 [2024-04-17 06:42:13.196199] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:08.727 [2024-04-17 06:42:13.213290] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:08.727 passed 00:16:08.727 Test: admin_create_io_qp_max_qps ...[2024-04-17 06:42:13.297833] 
vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:10.100 [2024-04-17 06:42:14.395206] nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:10.358 [2024-04-17 06:42:14.777601] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:10.358 passed 00:16:10.358 Test: admin_create_io_sq_shared_cq ...[2024-04-17 06:42:14.866556] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:10.616 [2024-04-17 06:42:14.998186] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:10.616 [2024-04-17 06:42:15.035276] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:10.616 passed 00:16:10.616 00:16:10.616 Run Summary: Type Total Ran Passed Failed Inactive 00:16:10.616 suites 1 1 n/a 0 0 00:16:10.616 tests 18 18 18 0 0 00:16:10.616 asserts 360 360 360 0 n/a 00:16:10.616 00:16:10.616 Elapsed time = 1.566 seconds 00:16:10.616 06:42:15 -- compliance/compliance.sh@42 -- # killprocess 4159623 00:16:10.616 06:42:15 -- common/autotest_common.sh@936 -- # '[' -z 4159623 ']' 00:16:10.616 06:42:15 -- common/autotest_common.sh@940 -- # kill -0 4159623 00:16:10.616 06:42:15 -- common/autotest_common.sh@941 -- # uname 00:16:10.616 06:42:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:10.616 06:42:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4159623 00:16:10.616 06:42:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:10.616 06:42:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:10.616 06:42:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4159623' 00:16:10.616 killing process with pid 4159623 00:16:10.616 06:42:15 -- common/autotest_common.sh@955 -- # kill 4159623 00:16:10.616 06:42:15 -- common/autotest_common.sh@960 -- # wait 4159623 00:16:10.875 06:42:15 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:10.875 06:42:15 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:10.875 00:16:10.875 real 0m5.732s 00:16:10.875 user 0m16.148s 00:16:10.875 sys 0m0.548s 00:16:10.875 06:42:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:10.875 06:42:15 -- common/autotest_common.sh@10 -- # set +x 00:16:10.875 ************************************ 00:16:10.875 END TEST nvmf_vfio_user_nvme_compliance 00:16:10.875 ************************************ 00:16:10.875 06:42:15 -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:10.875 06:42:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:10.875 06:42:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:10.875 06:42:15 -- common/autotest_common.sh@10 -- # set +x 00:16:11.134 ************************************ 00:16:11.134 START TEST nvmf_vfio_user_fuzz 00:16:11.134 ************************************ 00:16:11.134 06:42:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:11.134 * Looking for test storage... 
00:16:11.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:11.134 06:42:15 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:11.134 06:42:15 -- nvmf/common.sh@7 -- # uname -s 00:16:11.134 06:42:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:11.134 06:42:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:11.134 06:42:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:11.134 06:42:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:11.134 06:42:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:11.134 06:42:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:11.134 06:42:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:11.134 06:42:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:11.134 06:42:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:11.134 06:42:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:11.134 06:42:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:11.134 06:42:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:11.134 06:42:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:11.134 06:42:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:11.134 06:42:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:11.134 06:42:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:11.134 06:42:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:11.134 06:42:15 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:11.134 06:42:15 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:11.134 06:42:15 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:11.134 06:42:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.134 06:42:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.134 06:42:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.134 06:42:15 -- paths/export.sh@5 -- # export PATH 00:16:11.134 06:42:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.134 06:42:15 -- nvmf/common.sh@47 -- # : 0 00:16:11.134 06:42:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:11.134 06:42:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:11.134 06:42:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:11.134 06:42:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:11.134 06:42:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:11.134 06:42:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:11.134 06:42:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:11.134 06:42:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:11.134 06:42:15 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:11.134 06:42:15 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:11.134 06:42:15 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:11.134 06:42:15 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:11.134 06:42:15 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:11.134 06:42:15 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:11.134 06:42:15 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:11.134 06:42:15 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=4160357 00:16:11.134 06:42:15 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:11.134 06:42:15 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 4160357' 00:16:11.134 Process pid: 4160357 00:16:11.134 06:42:15 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:11.134 06:42:15 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 4160357 00:16:11.134 06:42:15 -- common/autotest_common.sh@817 -- # '[' -z 4160357 ']' 00:16:11.134 06:42:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.134 06:42:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:11.134 06:42:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
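Editor's note: the fuzzer in the next chunk addresses the target purely through an SPDK transport-ID string (trtype/traddr/subnqn). As a hedged aside, the same string can be handed to other SPDK example initiators; for instance, assuming the identify example app is present under build/examples in this tree, it could be pointed at the subsystem the fuzz script creates below:

# assumption: 'identify' was built under build/examples; the transport-ID string
# matches the one later passed to nvme_fuzz via -F
./build/examples/identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'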
00:16:11.134 06:42:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:11.134 06:42:15 -- common/autotest_common.sh@10 -- # set +x 00:16:11.393 06:42:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:11.393 06:42:15 -- common/autotest_common.sh@850 -- # return 0 00:16:11.393 06:42:15 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:12.325 06:42:16 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:12.325 06:42:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:12.325 06:42:16 -- common/autotest_common.sh@10 -- # set +x 00:16:12.325 06:42:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:12.325 06:42:16 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:12.325 06:42:16 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:12.325 06:42:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:12.325 06:42:16 -- common/autotest_common.sh@10 -- # set +x 00:16:12.325 malloc0 00:16:12.325 06:42:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:12.325 06:42:16 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:12.325 06:42:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:12.325 06:42:16 -- common/autotest_common.sh@10 -- # set +x 00:16:12.584 06:42:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:12.584 06:42:16 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:12.584 06:42:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:12.584 06:42:16 -- common/autotest_common.sh@10 -- # set +x 00:16:12.584 06:42:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:12.584 06:42:16 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:12.584 06:42:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:12.584 06:42:16 -- common/autotest_common.sh@10 -- # set +x 00:16:12.584 06:42:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:12.584 06:42:16 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:12.584 06:42:16 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:44.663 Fuzzing completed. 
Shutting down the fuzz application 00:16:44.663 00:16:44.663 Dumping successful admin opcodes: 00:16:44.663 8, 9, 10, 24, 00:16:44.663 Dumping successful io opcodes: 00:16:44.663 0, 00:16:44.663 NS: 0x200003a1ef00 I/O qp, Total commands completed: 576059, total successful commands: 2216, random_seed: 2274824960 00:16:44.663 NS: 0x200003a1ef00 admin qp, Total commands completed: 74360, total successful commands: 582, random_seed: 3152181504 00:16:44.663 06:42:47 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:44.663 06:42:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:44.663 06:42:47 -- common/autotest_common.sh@10 -- # set +x 00:16:44.663 06:42:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:44.663 06:42:47 -- target/vfio_user_fuzz.sh@46 -- # killprocess 4160357 00:16:44.663 06:42:47 -- common/autotest_common.sh@936 -- # '[' -z 4160357 ']' 00:16:44.663 06:42:47 -- common/autotest_common.sh@940 -- # kill -0 4160357 00:16:44.663 06:42:47 -- common/autotest_common.sh@941 -- # uname 00:16:44.663 06:42:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:44.663 06:42:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4160357 00:16:44.663 06:42:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:44.663 06:42:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:44.663 06:42:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4160357' 00:16:44.663 killing process with pid 4160357 00:16:44.663 06:42:47 -- common/autotest_common.sh@955 -- # kill 4160357 00:16:44.663 06:42:47 -- common/autotest_common.sh@960 -- # wait 4160357 00:16:44.663 06:42:47 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:44.663 06:42:47 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:44.663 00:16:44.663 real 0m32.197s 00:16:44.663 user 0m31.046s 00:16:44.663 sys 0m28.395s 00:16:44.663 06:42:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:44.663 06:42:47 -- common/autotest_common.sh@10 -- # set +x 00:16:44.663 ************************************ 00:16:44.663 END TEST nvmf_vfio_user_fuzz 00:16:44.663 ************************************ 00:16:44.663 06:42:47 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:44.663 06:42:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:44.663 06:42:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:44.663 06:42:47 -- common/autotest_common.sh@10 -- # set +x 00:16:44.663 ************************************ 00:16:44.663 START TEST nvmf_host_management 00:16:44.663 ************************************ 00:16:44.663 06:42:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:44.663 * Looking for test storage... 
00:16:44.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:44.663 06:42:47 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:44.663 06:42:47 -- nvmf/common.sh@7 -- # uname -s 00:16:44.663 06:42:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:44.663 06:42:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:44.663 06:42:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:44.663 06:42:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:44.663 06:42:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:44.663 06:42:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:44.663 06:42:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:44.663 06:42:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:44.663 06:42:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:44.663 06:42:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:44.663 06:42:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:44.663 06:42:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:44.663 06:42:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:44.663 06:42:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:44.663 06:42:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:44.663 06:42:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:44.663 06:42:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:44.663 06:42:47 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:44.663 06:42:47 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:44.663 06:42:47 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:44.663 06:42:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.663 06:42:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.663 06:42:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.663 06:42:47 -- paths/export.sh@5 -- # export PATH 00:16:44.663 06:42:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.663 06:42:47 -- nvmf/common.sh@47 -- # : 0 00:16:44.663 06:42:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:44.663 06:42:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:44.663 06:42:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:44.663 06:42:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:44.663 06:42:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:44.663 06:42:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:44.663 06:42:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:44.663 06:42:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:44.663 06:42:47 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:44.663 06:42:47 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:44.663 06:42:47 -- target/host_management.sh@105 -- # nvmftestinit 00:16:44.663 06:42:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:44.663 06:42:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:44.663 06:42:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:44.663 06:42:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:44.663 06:42:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:44.663 06:42:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.663 06:42:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.663 06:42:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.663 06:42:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:44.663 06:42:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:44.663 06:42:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:44.663 06:42:47 -- common/autotest_common.sh@10 -- # set +x 00:16:45.231 06:42:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:45.231 06:42:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:45.231 06:42:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:45.231 06:42:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:45.231 06:42:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:45.231 06:42:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:45.231 06:42:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:45.231 06:42:49 -- nvmf/common.sh@295 -- # net_devs=() 00:16:45.231 06:42:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:45.231 
06:42:49 -- nvmf/common.sh@296 -- # e810=() 00:16:45.231 06:42:49 -- nvmf/common.sh@296 -- # local -ga e810 00:16:45.231 06:42:49 -- nvmf/common.sh@297 -- # x722=() 00:16:45.231 06:42:49 -- nvmf/common.sh@297 -- # local -ga x722 00:16:45.231 06:42:49 -- nvmf/common.sh@298 -- # mlx=() 00:16:45.231 06:42:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:45.231 06:42:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:45.231 06:42:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:45.231 06:42:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:45.231 06:42:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:45.231 06:42:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:45.231 06:42:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:45.231 06:42:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:45.231 06:42:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:45.231 06:42:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:45.231 06:42:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:45.231 06:42:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:45.231 06:42:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:45.231 06:42:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:45.231 06:42:49 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:45.231 06:42:49 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:45.231 06:42:49 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:45.231 06:42:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:45.231 06:42:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:45.231 06:42:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:45.231 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:45.231 06:42:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:45.231 06:42:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:45.231 06:42:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.231 06:42:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:45.231 06:42:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:45.231 06:42:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:45.231 06:42:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:45.231 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:45.231 06:42:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:45.231 06:42:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:45.231 06:42:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.231 06:42:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:45.231 06:42:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:45.231 06:42:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:45.231 06:42:49 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:45.231 06:42:49 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:45.231 06:42:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:45.231 06:42:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.231 06:42:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:45.231 06:42:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.231 06:42:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:16:45.231 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:45.231 06:42:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.492 06:42:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:45.492 06:42:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.492 06:42:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:45.492 06:42:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.492 06:42:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:45.492 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:45.492 06:42:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.492 06:42:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:45.492 06:42:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:45.492 06:42:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:45.492 06:42:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:45.492 06:42:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:45.492 06:42:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:45.492 06:42:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:45.492 06:42:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:45.492 06:42:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:45.492 06:42:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:45.492 06:42:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:45.492 06:42:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:45.492 06:42:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:45.492 06:42:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:45.492 06:42:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:45.492 06:42:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:45.492 06:42:49 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:45.492 06:42:49 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:45.492 06:42:49 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:45.492 06:42:49 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:45.492 06:42:49 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:45.492 06:42:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:45.492 06:42:49 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:45.492 06:42:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:45.492 06:42:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:45.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:45.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:16:45.492 00:16:45.492 --- 10.0.0.2 ping statistics --- 00:16:45.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.492 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:16:45.493 06:42:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:45.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:45.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:16:45.493 00:16:45.493 --- 10.0.0.1 ping statistics --- 00:16:45.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.493 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:16:45.493 06:42:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:45.493 06:42:49 -- nvmf/common.sh@411 -- # return 0 00:16:45.493 06:42:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:45.493 06:42:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:45.493 06:42:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:45.493 06:42:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:45.493 06:42:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:45.493 06:42:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:45.493 06:42:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:45.493 06:42:49 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:16:45.493 06:42:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:45.493 06:42:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:45.493 06:42:49 -- common/autotest_common.sh@10 -- # set +x 00:16:45.493 ************************************ 00:16:45.493 START TEST nvmf_host_management 00:16:45.493 ************************************ 00:16:45.493 06:42:50 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:16:45.493 06:42:50 -- target/host_management.sh@69 -- # starttarget 00:16:45.493 06:42:50 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:45.493 06:42:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:45.493 06:42:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:45.493 06:42:50 -- common/autotest_common.sh@10 -- # set +x 00:16:45.493 06:42:50 -- nvmf/common.sh@470 -- # nvmfpid=4165815 00:16:45.493 06:42:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:45.493 06:42:50 -- nvmf/common.sh@471 -- # waitforlisten 4165815 00:16:45.493 06:42:50 -- common/autotest_common.sh@817 -- # '[' -z 4165815 ']' 00:16:45.493 06:42:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.493 06:42:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:45.493 06:42:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.493 06:42:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:45.493 06:42:50 -- common/autotest_common.sh@10 -- # set +x 00:16:45.786 [2024-04-17 06:42:50.131331] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:16:45.786 [2024-04-17 06:42:50.131434] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:45.786 EAL: No free 2048 kB hugepages reported on node 1 00:16:45.786 [2024-04-17 06:42:50.203476] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:45.786 [2024-04-17 06:42:50.302215] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
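Editor's note: the nvmf_tcp_init sequence above (ip netns / ip link / iptables / ping) is what puts the target-side E810 port into its own network namespace before the host-management tests start. A condensed sketch of the same plumbing, with interface names and addresses taken directly from the log:

# target side lives in namespace cvl_0_0_ns_spdk on cvl_0_0; initiator stays on cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP traffic (port 4420) in from the initiator interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # initiator -> target reachability check, as seen in the log above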
00:16:45.786 [2024-04-17 06:42:50.302275] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:45.786 [2024-04-17 06:42:50.302300] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:45.786 [2024-04-17 06:42:50.302314] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:45.786 [2024-04-17 06:42:50.302326] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:45.786 [2024-04-17 06:42:50.302408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:45.787 [2024-04-17 06:42:50.302462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:45.787 [2024-04-17 06:42:50.302517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:45.787 [2024-04-17 06:42:50.302519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.045 06:42:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:46.045 06:42:50 -- common/autotest_common.sh@850 -- # return 0 00:16:46.045 06:42:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:46.045 06:42:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:46.045 06:42:50 -- common/autotest_common.sh@10 -- # set +x 00:16:46.045 06:42:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.045 06:42:50 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:46.045 06:42:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:46.045 06:42:50 -- common/autotest_common.sh@10 -- # set +x 00:16:46.045 [2024-04-17 06:42:50.465096] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:46.045 06:42:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:46.045 06:42:50 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:46.045 06:42:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:46.045 06:42:50 -- common/autotest_common.sh@10 -- # set +x 00:16:46.045 06:42:50 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:46.045 06:42:50 -- target/host_management.sh@23 -- # cat 00:16:46.045 06:42:50 -- target/host_management.sh@30 -- # rpc_cmd 00:16:46.045 06:42:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:46.045 06:42:50 -- common/autotest_common.sh@10 -- # set +x 00:16:46.045 Malloc0 00:16:46.045 [2024-04-17 06:42:50.524994] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:46.045 06:42:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:46.045 06:42:50 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:46.045 06:42:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:46.045 06:42:50 -- common/autotest_common.sh@10 -- # set +x 00:16:46.045 06:42:50 -- target/host_management.sh@73 -- # perfpid=4165868 00:16:46.045 06:42:50 -- target/host_management.sh@74 -- # waitforlisten 4165868 /var/tmp/bdevperf.sock 00:16:46.045 06:42:50 -- common/autotest_common.sh@817 -- # '[' -z 4165868 ']' 00:16:46.045 06:42:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:46.045 06:42:50 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w 
verify -t 10 00:16:46.045 06:42:50 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:46.045 06:42:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:46.045 06:42:50 -- nvmf/common.sh@521 -- # config=() 00:16:46.045 06:42:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:46.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:46.045 06:42:50 -- nvmf/common.sh@521 -- # local subsystem config 00:16:46.045 06:42:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:46.045 06:42:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:46.045 06:42:50 -- common/autotest_common.sh@10 -- # set +x 00:16:46.045 06:42:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:46.045 { 00:16:46.045 "params": { 00:16:46.045 "name": "Nvme$subsystem", 00:16:46.045 "trtype": "$TEST_TRANSPORT", 00:16:46.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:46.045 "adrfam": "ipv4", 00:16:46.045 "trsvcid": "$NVMF_PORT", 00:16:46.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:46.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:46.045 "hdgst": ${hdgst:-false}, 00:16:46.045 "ddgst": ${ddgst:-false} 00:16:46.045 }, 00:16:46.045 "method": "bdev_nvme_attach_controller" 00:16:46.045 } 00:16:46.045 EOF 00:16:46.045 )") 00:16:46.045 06:42:50 -- nvmf/common.sh@543 -- # cat 00:16:46.045 06:42:50 -- nvmf/common.sh@545 -- # jq . 00:16:46.046 06:42:50 -- nvmf/common.sh@546 -- # IFS=, 00:16:46.046 06:42:50 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:46.046 "params": { 00:16:46.046 "name": "Nvme0", 00:16:46.046 "trtype": "tcp", 00:16:46.046 "traddr": "10.0.0.2", 00:16:46.046 "adrfam": "ipv4", 00:16:46.046 "trsvcid": "4420", 00:16:46.046 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:46.046 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:46.046 "hdgst": false, 00:16:46.046 "ddgst": false 00:16:46.046 }, 00:16:46.046 "method": "bdev_nvme_attach_controller" 00:16:46.046 }' 00:16:46.046 [2024-04-17 06:42:50.595720] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:16:46.046 [2024-04-17 06:42:50.595796] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4165868 ] 00:16:46.046 EAL: No free 2048 kB hugepages reported on node 1 00:16:46.303 [2024-04-17 06:42:50.663203] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.303 [2024-04-17 06:42:50.748603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.562 Running I/O for 10 seconds... 
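Editor's note: the bdevperf instance above receives its NVMe-oF controller definition through --json /dev/fd/63, generated by gen_nvmf_target_json from the fragment printed in the log. A sketch of the same configuration written to an ordinary file instead of a process substitution; the file name is illustrative, and the outer "subsystems"/"bdev" wrapper is an assumption about what the helper emits (only the inner method/params object appears verbatim above):

# assumption: gen_nvmf_target_json wraps the printed fragment in a bdev-subsystem config
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
  "params": {
    "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
    "adrfam": "ipv4", "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false, "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
} ] } ] }
EOF
# same workload parameters as the run above: 64-deep queue, 64 KiB I/O, verify for 10 s
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10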
00:16:46.562 06:42:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:46.562 06:42:51 -- common/autotest_common.sh@850 -- # return 0 00:16:46.562 06:42:51 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:46.562 06:42:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:46.562 06:42:51 -- common/autotest_common.sh@10 -- # set +x 00:16:46.562 06:42:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:46.562 06:42:51 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:46.562 06:42:51 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:46.562 06:42:51 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:46.562 06:42:51 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:46.562 06:42:51 -- target/host_management.sh@52 -- # local ret=1 00:16:46.562 06:42:51 -- target/host_management.sh@53 -- # local i 00:16:46.562 06:42:51 -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:46.562 06:42:51 -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:46.562 06:42:51 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:46.562 06:42:51 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:46.562 06:42:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:46.562 06:42:51 -- common/autotest_common.sh@10 -- # set +x 00:16:46.562 06:42:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:46.819 06:42:51 -- target/host_management.sh@55 -- # read_io_count=67 00:16:46.819 06:42:51 -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:16:46.819 06:42:51 -- target/host_management.sh@62 -- # sleep 0.25 00:16:47.078 06:42:51 -- target/host_management.sh@54 -- # (( i-- )) 00:16:47.078 06:42:51 -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:47.078 06:42:51 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:47.078 06:42:51 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:47.078 06:42:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.078 06:42:51 -- common/autotest_common.sh@10 -- # set +x 00:16:47.078 06:42:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.078 06:42:51 -- target/host_management.sh@55 -- # read_io_count=515 00:16:47.078 06:42:51 -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:16:47.078 06:42:51 -- target/host_management.sh@59 -- # ret=0 00:16:47.078 06:42:51 -- target/host_management.sh@60 -- # break 00:16:47.078 06:42:51 -- target/host_management.sh@64 -- # return 0 00:16:47.078 06:42:51 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:47.078 06:42:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.078 06:42:51 -- common/autotest_common.sh@10 -- # set +x 00:16:47.078 [2024-04-17 06:42:51.484379] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a3250 is same with the state(5) to be set 00:16:47.078 [2024-04-17 06:42:51.484472] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a3250 is same with the state(5) to be set 00:16:47.078 [2024-04-17 06:42:51.484719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:47.078 [2024-04-17 06:42:51.484761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.078 [2024-04-17 06:42:51.484787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.078 [2024-04-17 06:42:51.484803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.078 [2024-04-17 06:42:51.484820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.078 [2024-04-17 06:42:51.484834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.078 [2024-04-17 06:42:51.484849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.078 [2024-04-17 06:42:51.484862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.078 [2024-04-17 06:42:51.484877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.078 [2024-04-17 06:42:51.484890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.078 [2024-04-17 06:42:51.484905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.078 [2024-04-17 06:42:51.484918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.078 [2024-04-17 06:42:51.484933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.078 [2024-04-17 06:42:51.484946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.078 [2024-04-17 06:42:51.484971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.078 [2024-04-17 06:42:51.484985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.078 [2024-04-17 06:42:51.485000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.078 [2024-04-17 06:42:51.485013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.078 [2024-04-17 06:42:51.485028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.078 [2024-04-17 06:42:51.485041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.078 [2024-04-17 06:42:51.485056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.078 
[2024-04-17 06:42:51.485069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.078 [2024-04-17 06:42:51.485084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.078 [2024-04-17 06:42:51.485097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.078 [2024-04-17 06:42:51.485112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.078 [2024-04-17 06:42:51.485125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.078 [2024-04-17 06:42:51.485139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.078 [2024-04-17 06:42:51.485153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.485167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.485190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.485207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.485220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.485241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.485255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.485270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.485285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.485300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.485314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.485330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.485348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.485364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 
06:42:51.485378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.485393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.485407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.485423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.485437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.485467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.485489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.485505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.485519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.485535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.485550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.485566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.485581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.485597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.485612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.485628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.485642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.485659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.485674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.485688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 
06:42:51.485703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.485719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.485734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.485754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.485770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.485786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.485801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.485817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.485832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.485848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.485863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.485879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.485894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.485910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.485941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.485958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.485973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.485990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.486005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.486021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 
06:42:51.486036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.486052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.486068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.486084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.486099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.486116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.486131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.486148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.486167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.486191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.486208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.486236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.486250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.486265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.486279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.486294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.486309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.486324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.486338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.486353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 
06:42:51.486368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.486383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.486397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.079 [2024-04-17 06:42:51.486413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.079 [2024-04-17 06:42:51.486427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.080 [2024-04-17 06:42:51.486445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.080 [2024-04-17 06:42:51.486459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.080 [2024-04-17 06:42:51.486492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.080 [2024-04-17 06:42:51.486506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.080 [2024-04-17 06:42:51.486522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.080 [2024-04-17 06:42:51.486536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.080 [2024-04-17 06:42:51.486553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.080 [2024-04-17 06:42:51.486567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.080 [2024-04-17 06:42:51.486587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.080 [2024-04-17 06:42:51.486602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.080 [2024-04-17 06:42:51.486618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.080 [2024-04-17 06:42:51.486633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.080 [2024-04-17 06:42:51.486649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.080 [2024-04-17 06:42:51.486663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.080 [2024-04-17 06:42:51.486680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.080 [2024-04-17 06:42:51.486694] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.080 [2024-04-17 06:42:51.486710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.080 [2024-04-17 06:42:51.486725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.080 [2024-04-17 06:42:51.486741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.080 [2024-04-17 06:42:51.486756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.080 [2024-04-17 06:42:51.486772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.080 [2024-04-17 06:42:51.486787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.080 [2024-04-17 06:42:51.486874] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11f5740 was disconnected and freed. reset controller. 00:16:47.080 [2024-04-17 06:42:51.488025] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:47.080 06:42:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.080 06:42:51 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:47.080 06:42:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.080 06:42:51 -- common/autotest_common.sh@10 -- # set +x 00:16:47.080 task offset: 74624 on job bdev=Nvme0n1 fails 00:16:47.080 00:16:47.080 Latency(us) 00:16:47.080 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.080 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:47.080 Job: Nvme0n1 ended in about 0.41 seconds with error 00:16:47.080 Verification LBA range: start 0x0 length 0x400 00:16:47.080 Nvme0n1 : 0.41 1416.57 88.54 157.40 0.00 39528.32 2633.58 40001.23 00:16:47.080 =================================================================================================================== 00:16:47.080 Total : 1416.57 88.54 157.40 0.00 39528.32 2633.58 40001.23 00:16:47.080 [2024-04-17 06:42:51.489902] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:47.080 [2024-04-17 06:42:51.489932] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc4040 (9): Bad file descriptor 00:16:47.080 06:42:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.080 06:42:51 -- target/host_management.sh@87 -- # sleep 1 00:16:47.080 [2024-04-17 06:42:51.541899] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
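The burst of ABORTED - SQ DELETION notices above is the point of the test rather than a failure: once bdevperf is pushing I/O, host_management.sh revokes the host's access to the subsystem, which deletes the queue pairs mid-write, and then re-admits it so the bdev_nvme reset path can reconnect (the "Resetting controller successful" line). A minimal sketch of that sequence, using only the rpc.py calls and socket paths visible in the trace above, would be:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    # wait until bdevperf reports reads on Nvme0n1 (the 67 -> 515 counts above)
    for i in {10..1}; do
        ops=$($rpc -s "$sock" bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
        [ "$ops" -ge 100 ] && break
        sleep 0.25
    done

    # revoke the host while I/O is in flight: the target drops the queue pairs and
    # outstanding commands complete as ABORTED - SQ DELETION
    $rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

    # re-admit the host so the controller reset and reconnect can succeed
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0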
00:16:48.012 06:42:52 -- target/host_management.sh@91 -- # kill -9 4165868 00:16:48.012 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (4165868) - No such process 00:16:48.012 06:42:52 -- target/host_management.sh@91 -- # true 00:16:48.012 06:42:52 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:48.012 06:42:52 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:48.012 06:42:52 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:48.012 06:42:52 -- nvmf/common.sh@521 -- # config=() 00:16:48.012 06:42:52 -- nvmf/common.sh@521 -- # local subsystem config 00:16:48.012 06:42:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:48.012 06:42:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:48.012 { 00:16:48.012 "params": { 00:16:48.012 "name": "Nvme$subsystem", 00:16:48.012 "trtype": "$TEST_TRANSPORT", 00:16:48.012 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:48.012 "adrfam": "ipv4", 00:16:48.012 "trsvcid": "$NVMF_PORT", 00:16:48.012 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:48.012 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:48.012 "hdgst": ${hdgst:-false}, 00:16:48.012 "ddgst": ${ddgst:-false} 00:16:48.012 }, 00:16:48.012 "method": "bdev_nvme_attach_controller" 00:16:48.012 } 00:16:48.012 EOF 00:16:48.012 )") 00:16:48.012 06:42:52 -- nvmf/common.sh@543 -- # cat 00:16:48.012 06:42:52 -- nvmf/common.sh@545 -- # jq . 00:16:48.012 06:42:52 -- nvmf/common.sh@546 -- # IFS=, 00:16:48.012 06:42:52 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:48.012 "params": { 00:16:48.012 "name": "Nvme0", 00:16:48.012 "trtype": "tcp", 00:16:48.012 "traddr": "10.0.0.2", 00:16:48.012 "adrfam": "ipv4", 00:16:48.012 "trsvcid": "4420", 00:16:48.012 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:48.012 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:48.012 "hdgst": false, 00:16:48.012 "ddgst": false 00:16:48.013 }, 00:16:48.013 "method": "bdev_nvme_attach_controller" 00:16:48.013 }' 00:16:48.013 [2024-04-17 06:42:52.545790] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:16:48.013 [2024-04-17 06:42:52.545884] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4166143 ] 00:16:48.013 EAL: No free 2048 kB hugepages reported on node 1 00:16:48.013 [2024-04-17 06:42:52.608263] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.270 [2024-04-17 06:42:52.695496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.270 [2024-04-17 06:42:52.704306] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:16:48.527 Running I/O for 1 seconds... 
00:16:49.460 00:16:49.460 Latency(us) 00:16:49.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.460 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:49.460 Verification LBA range: start 0x0 length 0x400 00:16:49.460 Nvme0n1 : 1.00 1531.43 95.71 0.00 0.00 41137.02 9563.40 34564.17 00:16:49.460 =================================================================================================================== 00:16:49.460 Total : 1531.43 95.71 0.00 0.00 41137.02 9563.40 34564.17 00:16:49.718 06:42:54 -- target/host_management.sh@102 -- # stoptarget 00:16:49.718 06:42:54 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:49.718 06:42:54 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:49.718 06:42:54 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:49.718 06:42:54 -- target/host_management.sh@40 -- # nvmftestfini 00:16:49.718 06:42:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:49.718 06:42:54 -- nvmf/common.sh@117 -- # sync 00:16:49.718 06:42:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:49.718 06:42:54 -- nvmf/common.sh@120 -- # set +e 00:16:49.718 06:42:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:49.718 06:42:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:49.718 rmmod nvme_tcp 00:16:49.718 rmmod nvme_fabrics 00:16:49.718 rmmod nvme_keyring 00:16:49.718 06:42:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:49.718 06:42:54 -- nvmf/common.sh@124 -- # set -e 00:16:49.718 06:42:54 -- nvmf/common.sh@125 -- # return 0 00:16:49.718 06:42:54 -- nvmf/common.sh@478 -- # '[' -n 4165815 ']' 00:16:49.718 06:42:54 -- nvmf/common.sh@479 -- # killprocess 4165815 00:16:49.718 06:42:54 -- common/autotest_common.sh@936 -- # '[' -z 4165815 ']' 00:16:49.718 06:42:54 -- common/autotest_common.sh@940 -- # kill -0 4165815 00:16:49.718 06:42:54 -- common/autotest_common.sh@941 -- # uname 00:16:49.718 06:42:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:49.718 06:42:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4165815 00:16:49.976 06:42:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:49.976 06:42:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:49.976 06:42:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4165815' 00:16:49.976 killing process with pid 4165815 00:16:49.976 06:42:54 -- common/autotest_common.sh@955 -- # kill 4165815 00:16:49.976 06:42:54 -- common/autotest_common.sh@960 -- # wait 4165815 00:16:49.976 [2024-04-17 06:42:54.579166] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:50.233 06:42:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:50.233 06:42:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:50.233 06:42:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:50.233 06:42:54 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:50.233 06:42:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:50.233 06:42:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.233 06:42:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:50.233 06:42:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:52.134 06:42:56 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
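Note how the second bdevperf run receives its target description: gen_nvmf_target_json prints the bdev_nvme_attach_controller parameters shown in the trace, and they reach bdevperf as --json /dev/fd/62, i.e. via bash process substitution rather than a config file on disk. In isolation the idiom looks roughly like this (a sketch assembled from the flags above, not a verified command line):

    # hand the generated JSON to bdevperf as an anonymous file descriptor
    ./build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1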
00:16:52.134 00:16:52.134 real 0m6.574s 00:16:52.134 user 0m19.412s 00:16:52.134 sys 0m1.217s 00:16:52.134 06:42:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:52.134 06:42:56 -- common/autotest_common.sh@10 -- # set +x 00:16:52.134 ************************************ 00:16:52.134 END TEST nvmf_host_management 00:16:52.134 ************************************ 00:16:52.134 06:42:56 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:52.134 00:16:52.134 real 0m8.846s 00:16:52.134 user 0m20.236s 00:16:52.134 sys 0m2.682s 00:16:52.134 06:42:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:52.134 06:42:56 -- common/autotest_common.sh@10 -- # set +x 00:16:52.134 ************************************ 00:16:52.134 END TEST nvmf_host_management 00:16:52.134 ************************************ 00:16:52.134 06:42:56 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:52.134 06:42:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:52.134 06:42:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:52.134 06:42:56 -- common/autotest_common.sh@10 -- # set +x 00:16:52.393 ************************************ 00:16:52.393 START TEST nvmf_lvol 00:16:52.393 ************************************ 00:16:52.393 06:42:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:52.393 * Looking for test storage... 00:16:52.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:52.393 06:42:56 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:52.393 06:42:56 -- nvmf/common.sh@7 -- # uname -s 00:16:52.393 06:42:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:52.393 06:42:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:52.393 06:42:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:52.393 06:42:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:52.393 06:42:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:52.393 06:42:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:52.393 06:42:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:52.393 06:42:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:52.393 06:42:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:52.393 06:42:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:52.393 06:42:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:52.393 06:42:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:52.393 06:42:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:52.393 06:42:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:52.393 06:42:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:52.393 06:42:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:52.393 06:42:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:52.393 06:42:56 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:52.393 06:42:56 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:52.393 06:42:56 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:52.393 06:42:56 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.393 06:42:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.393 06:42:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.393 06:42:56 -- paths/export.sh@5 -- # export PATH 00:16:52.393 06:42:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.394 06:42:56 -- nvmf/common.sh@47 -- # : 0 00:16:52.394 06:42:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:52.394 06:42:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:52.394 06:42:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:52.394 06:42:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:52.394 06:42:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:52.394 06:42:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:52.394 06:42:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:52.394 06:42:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:52.394 06:42:56 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:52.394 06:42:56 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:52.394 06:42:56 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:52.394 06:42:56 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:52.394 06:42:56 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:52.394 06:42:56 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:52.394 06:42:56 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:52.394 06:42:56 -- nvmf/common.sh@435 -- # 
trap nvmftestfini SIGINT SIGTERM EXIT 00:16:52.394 06:42:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:52.394 06:42:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:52.394 06:42:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:52.394 06:42:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:52.394 06:42:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:52.394 06:42:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:52.394 06:42:56 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:52.394 06:42:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:52.394 06:42:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:52.394 06:42:56 -- common/autotest_common.sh@10 -- # set +x 00:16:54.297 06:42:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:54.297 06:42:58 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:54.297 06:42:58 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:54.297 06:42:58 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:54.297 06:42:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:54.297 06:42:58 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:54.297 06:42:58 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:54.297 06:42:58 -- nvmf/common.sh@295 -- # net_devs=() 00:16:54.297 06:42:58 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:54.297 06:42:58 -- nvmf/common.sh@296 -- # e810=() 00:16:54.297 06:42:58 -- nvmf/common.sh@296 -- # local -ga e810 00:16:54.297 06:42:58 -- nvmf/common.sh@297 -- # x722=() 00:16:54.297 06:42:58 -- nvmf/common.sh@297 -- # local -ga x722 00:16:54.297 06:42:58 -- nvmf/common.sh@298 -- # mlx=() 00:16:54.297 06:42:58 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:54.297 06:42:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:54.297 06:42:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:54.297 06:42:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:54.297 06:42:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:54.297 06:42:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:54.297 06:42:58 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:54.297 06:42:58 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:54.297 06:42:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:54.297 06:42:58 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:54.297 06:42:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:54.297 06:42:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:54.297 06:42:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:54.297 06:42:58 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:54.297 06:42:58 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:54.297 06:42:58 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:54.297 06:42:58 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:54.297 06:42:58 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:54.297 06:42:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:54.297 06:42:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:54.297 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:54.297 06:42:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:54.297 06:42:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:54.297 
06:42:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.297 06:42:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:54.297 06:42:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:54.297 06:42:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:54.297 06:42:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:54.297 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:54.297 06:42:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:54.297 06:42:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:54.297 06:42:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.297 06:42:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:54.297 06:42:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:54.297 06:42:58 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:54.297 06:42:58 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:54.297 06:42:58 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:54.297 06:42:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:54.297 06:42:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.297 06:42:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:54.297 06:42:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.297 06:42:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:54.297 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:54.297 06:42:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.297 06:42:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:54.297 06:42:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.297 06:42:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:54.297 06:42:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.297 06:42:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:54.297 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:54.297 06:42:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.297 06:42:58 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:54.297 06:42:58 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:54.297 06:42:58 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:54.297 06:42:58 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:54.297 06:42:58 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:54.297 06:42:58 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:54.297 06:42:58 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:54.297 06:42:58 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:54.297 06:42:58 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:54.297 06:42:58 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:54.297 06:42:58 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:54.297 06:42:58 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:54.297 06:42:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:54.297 06:42:58 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:54.297 06:42:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:54.297 06:42:58 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:54.297 06:42:58 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:54.297 06:42:58 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:54.298 06:42:58 -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:16:54.298 06:42:58 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:54.298 06:42:58 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:54.298 06:42:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:54.298 06:42:58 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:54.298 06:42:58 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:54.298 06:42:58 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:54.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:54.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:16:54.298 00:16:54.298 --- 10.0.0.2 ping statistics --- 00:16:54.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.298 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:16:54.298 06:42:58 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:54.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:54.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:16:54.298 00:16:54.298 --- 10.0.0.1 ping statistics --- 00:16:54.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.298 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:16:54.298 06:42:58 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:54.298 06:42:58 -- nvmf/common.sh@411 -- # return 0 00:16:54.298 06:42:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:54.298 06:42:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:54.298 06:42:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:54.298 06:42:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:54.298 06:42:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:54.298 06:42:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:54.298 06:42:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:54.298 06:42:58 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:54.298 06:42:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:54.298 06:42:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:54.298 06:42:58 -- common/autotest_common.sh@10 -- # set +x 00:16:54.298 06:42:58 -- nvmf/common.sh@470 -- # nvmfpid=4168360 00:16:54.298 06:42:58 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:54.298 06:42:58 -- nvmf/common.sh@471 -- # waitforlisten 4168360 00:16:54.298 06:42:58 -- common/autotest_common.sh@817 -- # '[' -z 4168360 ']' 00:16:54.298 06:42:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.298 06:42:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:54.298 06:42:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.298 06:42:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:54.298 06:42:58 -- common/autotest_common.sh@10 -- # set +x 00:16:54.557 [2024-04-17 06:42:58.931477] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
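Before the lvol target comes up, nvmf_tcp_init (traced piecemeal above) turns the two detected ice ports into a point-to-point NVMe/TCP fabric: the target-side port moves into its own network namespace, both ends get addresses on 10.0.0.0/24, the NVMe/TCP port is allowed through iptables, and a ping in each direction proves the path. Consolidated from the trace, the sequence amounts to:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

This is also why the nvmf_tgt invocation above is prefixed with ip netns exec cvl_0_0_ns_spdk: the target listens on 10.0.0.2 inside the namespace while the initiator-side tools connect from the root namespace.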
00:16:54.557 [2024-04-17 06:42:58.931584] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.557 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.557 [2024-04-17 06:42:59.001066] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:54.557 [2024-04-17 06:42:59.089325] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:54.557 [2024-04-17 06:42:59.089391] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:54.557 [2024-04-17 06:42:59.089418] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:54.557 [2024-04-17 06:42:59.089433] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:54.557 [2024-04-17 06:42:59.089445] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:54.557 [2024-04-17 06:42:59.089533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.557 [2024-04-17 06:42:59.089613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:54.557 [2024-04-17 06:42:59.089616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.815 06:42:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:54.815 06:42:59 -- common/autotest_common.sh@850 -- # return 0 00:16:54.815 06:42:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:54.815 06:42:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:54.815 06:42:59 -- common/autotest_common.sh@10 -- # set +x 00:16:54.815 06:42:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:54.815 06:42:59 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:55.073 [2024-04-17 06:42:59.461184] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:55.073 06:42:59 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:55.330 06:42:59 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:55.330 06:42:59 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:55.587 06:43:00 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:55.587 06:43:00 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:55.845 06:43:00 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:56.103 06:43:00 -- target/nvmf_lvol.sh@29 -- # lvs=a8874180-2629-4a24-9d31-a67ccbd60252 00:16:56.103 06:43:00 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a8874180-2629-4a24-9d31-a67ccbd60252 lvol 20 00:16:56.361 06:43:00 -- target/nvmf_lvol.sh@32 -- # lvol=0fa4a35f-1c7e-4e5d-b683-8d78869aa639 00:16:56.361 06:43:00 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:56.619 06:43:01 -- target/nvmf_lvol.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0fa4a35f-1c7e-4e5d-b683-8d78869aa639 00:16:56.877 06:43:01 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:56.877 [2024-04-17 06:43:01.465339] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.877 06:43:01 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:57.136 06:43:01 -- target/nvmf_lvol.sh@42 -- # perf_pid=4168663 00:16:57.136 06:43:01 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:57.136 06:43:01 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:57.394 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.328 06:43:02 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 0fa4a35f-1c7e-4e5d-b683-8d78869aa639 MY_SNAPSHOT 00:16:58.586 06:43:03 -- target/nvmf_lvol.sh@47 -- # snapshot=df381972-adb6-452e-8e01-c5e8a326970c 00:16:58.586 06:43:03 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 0fa4a35f-1c7e-4e5d-b683-8d78869aa639 30 00:16:58.844 06:43:03 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone df381972-adb6-452e-8e01-c5e8a326970c MY_CLONE 00:16:59.103 06:43:03 -- target/nvmf_lvol.sh@49 -- # clone=3f04c9ea-583a-4033-b5ba-7996b76d6b42 00:16:59.103 06:43:03 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 3f04c9ea-583a-4033-b5ba-7996b76d6b42 00:16:59.676 06:43:04 -- target/nvmf_lvol.sh@53 -- # wait 4168663 00:17:07.865 Initializing NVMe Controllers 00:17:07.865 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:07.865 Controller IO queue size 128, less than required. 00:17:07.865 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:07.865 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:07.865 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:07.865 Initialization complete. Launching workers. 
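While spdk_nvme_perf drives random writes against the exported namespace from cores 3 and 4, nvmf_lvol.sh walks the volume through its whole lifecycle: an lvstore on a raid0 of two malloc bdevs, an lvol of size 20 (LVOL_BDEV_INIT_SIZE), a snapshot, a resize to 30 (LVOL_BDEV_FINAL_SIZE), a clone of that snapshot, and an inflate of the clone. Pulling the rpc.py calls out of the trace, with the returned names and UUIDs captured in shell variables instead of repeated literally, gives roughly:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc bdev_malloc_create 64 512                       # Malloc0 in the trace
    $rpc bdev_malloc_create 64 512                       # Malloc1 in the trace
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)       # lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)      # initial size, LVOL_BDEV_INIT_SIZE
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30                     # grow to LVOL_BDEV_FINAL_SIZE
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"

The lvol itself is exported through nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420 (the add_ns/add_listener calls above) before the perf job attaches to it.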
00:17:07.865 ======================================================== 00:17:07.865 Latency(us) 00:17:07.865 Device Information : IOPS MiB/s Average min max 00:17:07.865 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10559.90 41.25 12127.07 1662.92 75553.66 00:17:07.865 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10438.90 40.78 12269.98 2086.83 80691.49 00:17:07.865 ======================================================== 00:17:07.865 Total : 20998.80 82.03 12198.11 1662.92 80691.49 00:17:07.865 00:17:07.865 06:43:12 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:08.122 06:43:12 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0fa4a35f-1c7e-4e5d-b683-8d78869aa639 00:17:08.381 06:43:12 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a8874180-2629-4a24-9d31-a67ccbd60252 00:17:08.381 06:43:12 -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:08.381 06:43:12 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:08.381 06:43:12 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:08.381 06:43:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:08.381 06:43:12 -- nvmf/common.sh@117 -- # sync 00:17:08.381 06:43:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:08.381 06:43:12 -- nvmf/common.sh@120 -- # set +e 00:17:08.381 06:43:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:08.381 06:43:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:08.639 rmmod nvme_tcp 00:17:08.639 rmmod nvme_fabrics 00:17:08.639 rmmod nvme_keyring 00:17:08.639 06:43:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:08.639 06:43:13 -- nvmf/common.sh@124 -- # set -e 00:17:08.639 06:43:13 -- nvmf/common.sh@125 -- # return 0 00:17:08.639 06:43:13 -- nvmf/common.sh@478 -- # '[' -n 4168360 ']' 00:17:08.639 06:43:13 -- nvmf/common.sh@479 -- # killprocess 4168360 00:17:08.639 06:43:13 -- common/autotest_common.sh@936 -- # '[' -z 4168360 ']' 00:17:08.639 06:43:13 -- common/autotest_common.sh@940 -- # kill -0 4168360 00:17:08.639 06:43:13 -- common/autotest_common.sh@941 -- # uname 00:17:08.639 06:43:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:08.639 06:43:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4168360 00:17:08.639 06:43:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:08.639 06:43:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:08.639 06:43:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4168360' 00:17:08.639 killing process with pid 4168360 00:17:08.639 06:43:13 -- common/autotest_common.sh@955 -- # kill 4168360 00:17:08.639 06:43:13 -- common/autotest_common.sh@960 -- # wait 4168360 00:17:08.898 06:43:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:08.898 06:43:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:08.898 06:43:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:08.898 06:43:13 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:08.898 06:43:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:08.898 06:43:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.898 06:43:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:08.898 06:43:13 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:17:10.817 06:43:15 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:10.817 00:17:10.817 real 0m18.597s 00:17:10.817 user 1m4.169s 00:17:10.817 sys 0m5.305s 00:17:10.817 06:43:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:10.817 06:43:15 -- common/autotest_common.sh@10 -- # set +x 00:17:10.817 ************************************ 00:17:10.817 END TEST nvmf_lvol 00:17:10.817 ************************************ 00:17:10.817 06:43:15 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:10.817 06:43:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:10.817 06:43:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:10.817 06:43:15 -- common/autotest_common.sh@10 -- # set +x 00:17:11.075 ************************************ 00:17:11.075 START TEST nvmf_lvs_grow 00:17:11.075 ************************************ 00:17:11.075 06:43:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:11.075 * Looking for test storage... 00:17:11.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:11.075 06:43:15 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:11.075 06:43:15 -- nvmf/common.sh@7 -- # uname -s 00:17:11.075 06:43:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:11.075 06:43:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:11.075 06:43:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:11.075 06:43:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:11.075 06:43:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:11.075 06:43:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:11.075 06:43:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:11.076 06:43:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:11.076 06:43:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:11.076 06:43:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:11.076 06:43:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:11.076 06:43:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:11.076 06:43:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:11.076 06:43:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:11.076 06:43:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:11.076 06:43:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:11.076 06:43:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:11.076 06:43:15 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:11.076 06:43:15 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:11.076 06:43:15 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:11.076 06:43:15 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.076 06:43:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.076 06:43:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.076 06:43:15 -- paths/export.sh@5 -- # export PATH 00:17:11.076 06:43:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.076 06:43:15 -- nvmf/common.sh@47 -- # : 0 00:17:11.076 06:43:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:11.076 06:43:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:11.076 06:43:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:11.076 06:43:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:11.076 06:43:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:11.076 06:43:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:11.076 06:43:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:11.076 06:43:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:11.076 06:43:15 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:11.076 06:43:15 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:11.076 06:43:15 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:17:11.076 06:43:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:11.076 06:43:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:11.076 06:43:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:11.076 06:43:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:11.076 06:43:15 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:17:11.076 06:43:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.076 06:43:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:11.076 06:43:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.076 06:43:15 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:11.076 06:43:15 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:11.076 06:43:15 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:11.076 06:43:15 -- common/autotest_common.sh@10 -- # set +x 00:17:12.979 06:43:17 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:12.979 06:43:17 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:12.979 06:43:17 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:12.979 06:43:17 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:12.979 06:43:17 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:12.979 06:43:17 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:12.979 06:43:17 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:12.979 06:43:17 -- nvmf/common.sh@295 -- # net_devs=() 00:17:12.979 06:43:17 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:12.979 06:43:17 -- nvmf/common.sh@296 -- # e810=() 00:17:12.979 06:43:17 -- nvmf/common.sh@296 -- # local -ga e810 00:17:12.979 06:43:17 -- nvmf/common.sh@297 -- # x722=() 00:17:12.979 06:43:17 -- nvmf/common.sh@297 -- # local -ga x722 00:17:12.979 06:43:17 -- nvmf/common.sh@298 -- # mlx=() 00:17:12.979 06:43:17 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:12.979 06:43:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:12.979 06:43:17 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:12.979 06:43:17 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:12.979 06:43:17 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:12.979 06:43:17 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:12.979 06:43:17 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:12.979 06:43:17 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:12.979 06:43:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:12.979 06:43:17 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:12.979 06:43:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:12.979 06:43:17 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:12.979 06:43:17 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:12.979 06:43:17 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:12.979 06:43:17 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:12.979 06:43:17 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:12.979 06:43:17 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:12.979 06:43:17 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:12.979 06:43:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:12.979 06:43:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:12.979 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:12.979 06:43:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:12.979 06:43:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:12.979 06:43:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:12.979 06:43:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:12.979 06:43:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:12.979 
06:43:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:12.979 06:43:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:12.979 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:12.979 06:43:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:12.979 06:43:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:12.979 06:43:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:12.979 06:43:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:12.979 06:43:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:12.979 06:43:17 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:12.979 06:43:17 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:12.979 06:43:17 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:12.979 06:43:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:12.979 06:43:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:12.979 06:43:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:12.979 06:43:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:12.979 06:43:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:12.979 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:12.979 06:43:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:12.979 06:43:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:12.979 06:43:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:12.979 06:43:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:12.979 06:43:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:12.979 06:43:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:12.979 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:12.979 06:43:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:12.979 06:43:17 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:12.979 06:43:17 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:12.979 06:43:17 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:12.979 06:43:17 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:12.979 06:43:17 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:12.979 06:43:17 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:12.979 06:43:17 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:12.979 06:43:17 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:12.979 06:43:17 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:12.979 06:43:17 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:12.980 06:43:17 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:12.980 06:43:17 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:12.980 06:43:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:12.980 06:43:17 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:12.980 06:43:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:12.980 06:43:17 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:12.980 06:43:17 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:12.980 06:43:17 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:12.980 06:43:17 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:12.980 06:43:17 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:12.980 06:43:17 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:12.980 
06:43:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:13.239 06:43:17 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:13.239 06:43:17 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:13.239 06:43:17 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:13.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:13.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:17:13.239 00:17:13.239 --- 10.0.0.2 ping statistics --- 00:17:13.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.239 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:17:13.239 06:43:17 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:13.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:13.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:17:13.239 00:17:13.239 --- 10.0.0.1 ping statistics --- 00:17:13.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.239 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:17:13.239 06:43:17 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:13.239 06:43:17 -- nvmf/common.sh@411 -- # return 0 00:17:13.239 06:43:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:13.239 06:43:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:13.239 06:43:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:13.239 06:43:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:13.239 06:43:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:13.239 06:43:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:13.239 06:43:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:13.239 06:43:17 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:17:13.239 06:43:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:13.239 06:43:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:13.239 06:43:17 -- common/autotest_common.sh@10 -- # set +x 00:17:13.239 06:43:17 -- nvmf/common.sh@470 -- # nvmfpid=4171929 00:17:13.239 06:43:17 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:13.239 06:43:17 -- nvmf/common.sh@471 -- # waitforlisten 4171929 00:17:13.239 06:43:17 -- common/autotest_common.sh@817 -- # '[' -z 4171929 ']' 00:17:13.239 06:43:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.239 06:43:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:13.239 06:43:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.239 06:43:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:13.239 06:43:17 -- common/autotest_common.sh@10 -- # set +x 00:17:13.239 [2024-04-17 06:43:17.711033] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
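The trace above is nvmftestinit wiring the two ice ports (cvl_0_0 and cvl_0_1) into a back-to-back NVMe/TCP test bed: the target-side port is moved into a dedicated network namespace, each end gets a 10.0.0.x/24 address, TCP port 4420 is opened in iptables, and connectivity is verified with ping before nvmf_tgt is started inside that namespace. A minimal sketch of the same steps, reusing the interface names and addresses from this run (they are specific to this machine), would be:

  TGT_IF=cvl_0_0                # target port, moved into the namespace
  INI_IF=cvl_0_1                # initiator port, stays in the root namespace
  NS=cvl_0_0_ns_spdk
  ip -4 addr flush $TGT_IF; ip -4 addr flush $INI_IF
  ip netns add $NS
  ip link set $TGT_IF netns $NS
  ip addr add 10.0.0.1/24 dev $INI_IF
  ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF
  ip link set $INI_IF up
  ip netns exec $NS ip link set $TGT_IF up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                      # initiator -> target
  ip netns exec $NS ping -c 1 10.0.0.1    # target -> initiator
  # the target app then runs inside the namespace, as seen below:
  # ip netns exec $NS build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1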
00:17:13.239 [2024-04-17 06:43:17.711101] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:13.239 EAL: No free 2048 kB hugepages reported on node 1 00:17:13.239 [2024-04-17 06:43:17.779376] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.497 [2024-04-17 06:43:17.868555] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:13.497 [2024-04-17 06:43:17.868612] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:13.497 [2024-04-17 06:43:17.868634] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:13.497 [2024-04-17 06:43:17.868646] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:13.497 [2024-04-17 06:43:17.868656] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:13.497 [2024-04-17 06:43:17.868696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.497 06:43:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:13.497 06:43:17 -- common/autotest_common.sh@850 -- # return 0 00:17:13.497 06:43:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:13.497 06:43:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:13.497 06:43:17 -- common/autotest_common.sh@10 -- # set +x 00:17:13.497 06:43:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:13.497 06:43:18 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:13.755 [2024-04-17 06:43:18.251876] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:13.755 06:43:18 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:17:13.755 06:43:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:13.755 06:43:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:13.755 06:43:18 -- common/autotest_common.sh@10 -- # set +x 00:17:14.013 ************************************ 00:17:14.013 START TEST lvs_grow_clean 00:17:14.013 ************************************ 00:17:14.013 06:43:18 -- common/autotest_common.sh@1111 -- # lvs_grow 00:17:14.013 06:43:18 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:14.013 06:43:18 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:14.013 06:43:18 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:14.013 06:43:18 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:14.013 06:43:18 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:14.013 06:43:18 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:14.013 06:43:18 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:14.013 06:43:18 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:14.013 06:43:18 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:14.271 06:43:18 -- 
target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:14.271 06:43:18 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:14.528 06:43:18 -- target/nvmf_lvs_grow.sh@28 -- # lvs=4f24eca7-1c41-4b8f-a258-d17cf4e4c94c 00:17:14.528 06:43:18 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f24eca7-1c41-4b8f-a258-d17cf4e4c94c 00:17:14.528 06:43:18 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:14.786 06:43:19 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:14.786 06:43:19 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:14.786 06:43:19 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4f24eca7-1c41-4b8f-a258-d17cf4e4c94c lvol 150 00:17:15.044 06:43:19 -- target/nvmf_lvs_grow.sh@33 -- # lvol=38e90c53-aa35-4c2a-a4e1-88203f1d4087 00:17:15.044 06:43:19 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:15.044 06:43:19 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:15.301 [2024-04-17 06:43:19.680388] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:15.301 [2024-04-17 06:43:19.680496] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:15.301 true 00:17:15.301 06:43:19 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f24eca7-1c41-4b8f-a258-d17cf4e4c94c 00:17:15.301 06:43:19 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:15.558 06:43:19 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:15.558 06:43:19 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:15.816 06:43:20 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 38e90c53-aa35-4c2a-a4e1-88203f1d4087 00:17:16.074 06:43:20 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:16.331 [2024-04-17 06:43:20.715602] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:16.331 06:43:20 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:16.590 06:43:21 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4172373 00:17:16.590 06:43:21 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:16.590 06:43:21 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:16.590 06:43:21 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4172373 
/var/tmp/bdevperf.sock 00:17:16.591 06:43:21 -- common/autotest_common.sh@817 -- # '[' -z 4172373 ']' 00:17:16.591 06:43:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:16.591 06:43:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:16.591 06:43:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:16.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:16.591 06:43:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:16.591 06:43:21 -- common/autotest_common.sh@10 -- # set +x 00:17:16.591 [2024-04-17 06:43:21.056392] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:17:16.591 [2024-04-17 06:43:21.056461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4172373 ] 00:17:16.591 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.591 [2024-04-17 06:43:21.123411] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.849 [2024-04-17 06:43:21.212518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.849 06:43:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:16.849 06:43:21 -- common/autotest_common.sh@850 -- # return 0 00:17:16.849 06:43:21 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:17.413 Nvme0n1 00:17:17.413 06:43:21 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:17.671 [ 00:17:17.671 { 00:17:17.671 "name": "Nvme0n1", 00:17:17.671 "aliases": [ 00:17:17.671 "38e90c53-aa35-4c2a-a4e1-88203f1d4087" 00:17:17.671 ], 00:17:17.671 "product_name": "NVMe disk", 00:17:17.671 "block_size": 4096, 00:17:17.671 "num_blocks": 38912, 00:17:17.671 "uuid": "38e90c53-aa35-4c2a-a4e1-88203f1d4087", 00:17:17.671 "assigned_rate_limits": { 00:17:17.671 "rw_ios_per_sec": 0, 00:17:17.671 "rw_mbytes_per_sec": 0, 00:17:17.671 "r_mbytes_per_sec": 0, 00:17:17.671 "w_mbytes_per_sec": 0 00:17:17.671 }, 00:17:17.671 "claimed": false, 00:17:17.671 "zoned": false, 00:17:17.671 "supported_io_types": { 00:17:17.671 "read": true, 00:17:17.671 "write": true, 00:17:17.671 "unmap": true, 00:17:17.671 "write_zeroes": true, 00:17:17.671 "flush": true, 00:17:17.671 "reset": true, 00:17:17.671 "compare": true, 00:17:17.671 "compare_and_write": true, 00:17:17.671 "abort": true, 00:17:17.671 "nvme_admin": true, 00:17:17.671 "nvme_io": true 00:17:17.671 }, 00:17:17.671 "memory_domains": [ 00:17:17.671 { 00:17:17.671 "dma_device_id": "system", 00:17:17.671 "dma_device_type": 1 00:17:17.671 } 00:17:17.671 ], 00:17:17.671 "driver_specific": { 00:17:17.671 "nvme": [ 00:17:17.671 { 00:17:17.671 "trid": { 00:17:17.671 "trtype": "TCP", 00:17:17.671 "adrfam": "IPv4", 00:17:17.671 "traddr": "10.0.0.2", 00:17:17.671 "trsvcid": "4420", 00:17:17.671 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:17.671 }, 00:17:17.671 "ctrlr_data": { 00:17:17.671 "cntlid": 1, 00:17:17.671 "vendor_id": "0x8086", 00:17:17.671 "model_number": "SPDK bdev Controller", 00:17:17.671 "serial_number": "SPDK0", 
00:17:17.671 "firmware_revision": "24.05", 00:17:17.671 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:17.671 "oacs": { 00:17:17.671 "security": 0, 00:17:17.671 "format": 0, 00:17:17.671 "firmware": 0, 00:17:17.671 "ns_manage": 0 00:17:17.671 }, 00:17:17.671 "multi_ctrlr": true, 00:17:17.671 "ana_reporting": false 00:17:17.671 }, 00:17:17.671 "vs": { 00:17:17.671 "nvme_version": "1.3" 00:17:17.671 }, 00:17:17.671 "ns_data": { 00:17:17.671 "id": 1, 00:17:17.671 "can_share": true 00:17:17.671 } 00:17:17.671 } 00:17:17.671 ], 00:17:17.671 "mp_policy": "active_passive" 00:17:17.671 } 00:17:17.671 } 00:17:17.671 ] 00:17:17.671 06:43:22 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4172511 00:17:17.671 06:43:22 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:17.671 06:43:22 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:17.671 Running I/O for 10 seconds... 00:17:18.605 Latency(us) 00:17:18.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.605 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:18.605 Nvme0n1 : 1.00 14095.00 55.06 0.00 0.00 0.00 0.00 0.00 00:17:18.605 =================================================================================================================== 00:17:18.605 Total : 14095.00 55.06 0.00 0.00 0.00 0.00 0.00 00:17:18.605 00:17:19.572 06:43:24 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4f24eca7-1c41-4b8f-a258-d17cf4e4c94c 00:17:19.572 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:19.572 Nvme0n1 : 2.00 14151.50 55.28 0.00 0.00 0.00 0.00 0.00 00:17:19.572 =================================================================================================================== 00:17:19.572 Total : 14151.50 55.28 0.00 0.00 0.00 0.00 0.00 00:17:19.572 00:17:19.840 true 00:17:19.840 06:43:24 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f24eca7-1c41-4b8f-a258-d17cf4e4c94c 00:17:19.840 06:43:24 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:20.103 06:43:24 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:20.103 06:43:24 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:20.103 06:43:24 -- target/nvmf_lvs_grow.sh@65 -- # wait 4172511 00:17:20.669 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:20.669 Nvme0n1 : 3.00 14319.67 55.94 0.00 0.00 0.00 0.00 0.00 00:17:20.669 =================================================================================================================== 00:17:20.669 Total : 14319.67 55.94 0.00 0.00 0.00 0.00 0.00 00:17:20.669 00:17:21.601 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:21.601 Nvme0n1 : 4.00 14355.75 56.08 0.00 0.00 0.00 0.00 0.00 00:17:21.602 =================================================================================================================== 00:17:21.602 Total : 14355.75 56.08 0.00 0.00 0.00 0.00 0.00 00:17:21.602 00:17:22.975 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:22.975 Nvme0n1 : 5.00 14441.20 56.41 0.00 0.00 0.00 0.00 0.00 00:17:22.975 =================================================================================================================== 00:17:22.975 Total : 
14441.20 56.41 0.00 0.00 0.00 0.00 0.00 00:17:22.975 00:17:23.909 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:23.909 Nvme0n1 : 6.00 14445.17 56.43 0.00 0.00 0.00 0.00 0.00 00:17:23.909 =================================================================================================================== 00:17:23.909 Total : 14445.17 56.43 0.00 0.00 0.00 0.00 0.00 00:17:23.909 00:17:24.843 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:24.843 Nvme0n1 : 7.00 14447.71 56.44 0.00 0.00 0.00 0.00 0.00 00:17:24.843 =================================================================================================================== 00:17:24.843 Total : 14447.71 56.44 0.00 0.00 0.00 0.00 0.00 00:17:24.843 00:17:25.777 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:25.777 Nvme0n1 : 8.00 14513.88 56.69 0.00 0.00 0.00 0.00 0.00 00:17:25.777 =================================================================================================================== 00:17:25.777 Total : 14513.88 56.69 0.00 0.00 0.00 0.00 0.00 00:17:25.777 00:17:26.712 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:26.712 Nvme0n1 : 9.00 14529.56 56.76 0.00 0.00 0.00 0.00 0.00 00:17:26.713 =================================================================================================================== 00:17:26.713 Total : 14529.56 56.76 0.00 0.00 0.00 0.00 0.00 00:17:26.713 00:17:27.647 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:27.647 Nvme0n1 : 10.00 14556.70 56.86 0.00 0.00 0.00 0.00 0.00 00:17:27.647 =================================================================================================================== 00:17:27.647 Total : 14556.70 56.86 0.00 0.00 0.00 0.00 0.00 00:17:27.647 00:17:27.647 00:17:27.647 Latency(us) 00:17:27.647 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:27.647 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:27.647 Nvme0n1 : 10.01 14559.56 56.87 0.00 0.00 8785.53 5000.15 16699.54 00:17:27.647 =================================================================================================================== 00:17:27.647 Total : 14559.56 56.87 0.00 0.00 8785.53 5000.15 16699.54 00:17:27.647 0 00:17:27.647 06:43:32 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4172373 00:17:27.647 06:43:32 -- common/autotest_common.sh@936 -- # '[' -z 4172373 ']' 00:17:27.647 06:43:32 -- common/autotest_common.sh@940 -- # kill -0 4172373 00:17:27.647 06:43:32 -- common/autotest_common.sh@941 -- # uname 00:17:27.647 06:43:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:27.647 06:43:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4172373 00:17:27.647 06:43:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:27.647 06:43:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:27.647 06:43:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4172373' 00:17:27.647 killing process with pid 4172373 00:17:27.647 06:43:32 -- common/autotest_common.sh@955 -- # kill 4172373 00:17:27.647 Received shutdown signal, test time was about 10.000000 seconds 00:17:27.647 00:17:27.647 Latency(us) 00:17:27.647 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:27.647 =================================================================================================================== 
00:17:27.647 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:27.647 06:43:32 -- common/autotest_common.sh@960 -- # wait 4172373 00:17:27.905 06:43:32 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:28.163 06:43:32 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f24eca7-1c41-4b8f-a258-d17cf4e4c94c 00:17:28.163 06:43:32 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:28.421 06:43:32 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:28.421 06:43:32 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:17:28.421 06:43:32 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:28.679 [2024-04-17 06:43:33.206930] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:28.679 06:43:33 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f24eca7-1c41-4b8f-a258-d17cf4e4c94c 00:17:28.679 06:43:33 -- common/autotest_common.sh@638 -- # local es=0 00:17:28.679 06:43:33 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f24eca7-1c41-4b8f-a258-d17cf4e4c94c 00:17:28.679 06:43:33 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:28.679 06:43:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:28.679 06:43:33 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:28.679 06:43:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:28.679 06:43:33 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:28.679 06:43:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:28.679 06:43:33 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:28.679 06:43:33 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:28.679 06:43:33 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f24eca7-1c41-4b8f-a258-d17cf4e4c94c 00:17:28.937 request: 00:17:28.937 { 00:17:28.937 "uuid": "4f24eca7-1c41-4b8f-a258-d17cf4e4c94c", 00:17:28.937 "method": "bdev_lvol_get_lvstores", 00:17:28.937 "req_id": 1 00:17:28.937 } 00:17:28.937 Got JSON-RPC error response 00:17:28.937 response: 00:17:28.937 { 00:17:28.937 "code": -19, 00:17:28.937 "message": "No such device" 00:17:28.937 } 00:17:28.937 06:43:33 -- common/autotest_common.sh@641 -- # es=1 00:17:28.937 06:43:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:28.937 06:43:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:28.937 06:43:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:28.937 06:43:33 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:29.195 aio_bdev 00:17:29.195 06:43:33 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 
38e90c53-aa35-4c2a-a4e1-88203f1d4087 00:17:29.195 06:43:33 -- common/autotest_common.sh@885 -- # local bdev_name=38e90c53-aa35-4c2a-a4e1-88203f1d4087 00:17:29.195 06:43:33 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:29.195 06:43:33 -- common/autotest_common.sh@887 -- # local i 00:17:29.195 06:43:33 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:29.195 06:43:33 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:29.195 06:43:33 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:29.453 06:43:33 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 38e90c53-aa35-4c2a-a4e1-88203f1d4087 -t 2000 00:17:29.712 [ 00:17:29.712 { 00:17:29.712 "name": "38e90c53-aa35-4c2a-a4e1-88203f1d4087", 00:17:29.712 "aliases": [ 00:17:29.712 "lvs/lvol" 00:17:29.712 ], 00:17:29.712 "product_name": "Logical Volume", 00:17:29.712 "block_size": 4096, 00:17:29.712 "num_blocks": 38912, 00:17:29.712 "uuid": "38e90c53-aa35-4c2a-a4e1-88203f1d4087", 00:17:29.712 "assigned_rate_limits": { 00:17:29.712 "rw_ios_per_sec": 0, 00:17:29.712 "rw_mbytes_per_sec": 0, 00:17:29.712 "r_mbytes_per_sec": 0, 00:17:29.712 "w_mbytes_per_sec": 0 00:17:29.712 }, 00:17:29.712 "claimed": false, 00:17:29.712 "zoned": false, 00:17:29.712 "supported_io_types": { 00:17:29.712 "read": true, 00:17:29.712 "write": true, 00:17:29.712 "unmap": true, 00:17:29.712 "write_zeroes": true, 00:17:29.712 "flush": false, 00:17:29.712 "reset": true, 00:17:29.712 "compare": false, 00:17:29.712 "compare_and_write": false, 00:17:29.712 "abort": false, 00:17:29.712 "nvme_admin": false, 00:17:29.712 "nvme_io": false 00:17:29.712 }, 00:17:29.712 "driver_specific": { 00:17:29.712 "lvol": { 00:17:29.712 "lvol_store_uuid": "4f24eca7-1c41-4b8f-a258-d17cf4e4c94c", 00:17:29.712 "base_bdev": "aio_bdev", 00:17:29.712 "thin_provision": false, 00:17:29.712 "snapshot": false, 00:17:29.712 "clone": false, 00:17:29.712 "esnap_clone": false 00:17:29.712 } 00:17:29.712 } 00:17:29.712 } 00:17:29.712 ] 00:17:29.712 06:43:34 -- common/autotest_common.sh@893 -- # return 0 00:17:29.712 06:43:34 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f24eca7-1c41-4b8f-a258-d17cf4e4c94c 00:17:29.712 06:43:34 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:29.969 06:43:34 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:29.969 06:43:34 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f24eca7-1c41-4b8f-a258-d17cf4e4c94c 00:17:29.969 06:43:34 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:30.227 06:43:34 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:30.227 06:43:34 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 38e90c53-aa35-4c2a-a4e1-88203f1d4087 00:17:30.485 06:43:34 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4f24eca7-1c41-4b8f-a258-d17cf4e4c94c 00:17:30.742 06:43:35 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:31.000 06:43:35 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 
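In outline, the lvs_grow_clean pass that just completed exercises bdev_lvol_grow_lvstore end to end: a 200M file-backed aio bdev hosts an lvstore with 49 data clusters (4 MiB each), a 150M lvol carved from it is exported over NVMe/TCP and driven with random writes, the backing file is grown to 400M and rescanned, and growing the lvstore brings total_data_clusters from 49 to 99. A condensed sketch of the RPC sequence, with the sizes and the aio file path taken from this run, would look roughly like:

  rpc=scripts/rpc.py
  aio_file=test/nvmf/target/aio_bdev           # backing file used by this test
  truncate -s 200M $aio_file
  $rpc bdev_aio_create $aio_file aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  lvol=$($rpc bdev_lvol_create -u $lvs lvol 150)
  $rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'   # 49
  truncate -s 400M $aio_file                   # grow the backing file...
  $rpc bdev_aio_rescan aio_bdev                # ...and let the aio bdev pick it up
  # (the lvol is exported over NVMe/TCP and bdevperf runs randwrite against it here)
  $rpc bdev_lvol_grow_lvstore -u $lvs          # grow the lvstore into the new space
  $rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'   # 99

The clean pass then deletes the lvol, the lvstore and the aio bdev and removes the backing file, which is the cleanup visible just above.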
00:17:31.000 00:17:31.000 real 0m17.168s 00:17:31.000 user 0m16.595s 00:17:31.000 sys 0m1.914s 00:17:31.000 06:43:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:31.000 06:43:35 -- common/autotest_common.sh@10 -- # set +x 00:17:31.000 ************************************ 00:17:31.000 END TEST lvs_grow_clean 00:17:31.000 ************************************ 00:17:31.000 06:43:35 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:31.000 06:43:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:31.000 06:43:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:31.000 06:43:35 -- common/autotest_common.sh@10 -- # set +x 00:17:31.257 ************************************ 00:17:31.257 START TEST lvs_grow_dirty 00:17:31.257 ************************************ 00:17:31.257 06:43:35 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:17:31.257 06:43:35 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:31.257 06:43:35 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:31.257 06:43:35 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:31.257 06:43:35 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:31.257 06:43:35 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:31.257 06:43:35 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:31.257 06:43:35 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:31.257 06:43:35 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:31.257 06:43:35 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:31.515 06:43:35 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:31.515 06:43:35 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:31.773 06:43:36 -- target/nvmf_lvs_grow.sh@28 -- # lvs=077dec1f-d969-4e38-9195-2996825e0c5f 00:17:31.773 06:43:36 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 077dec1f-d969-4e38-9195-2996825e0c5f 00:17:31.773 06:43:36 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:32.031 06:43:36 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:32.031 06:43:36 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:32.031 06:43:36 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 077dec1f-d969-4e38-9195-2996825e0c5f lvol 150 00:17:32.288 06:43:36 -- target/nvmf_lvs_grow.sh@33 -- # lvol=65d7cd94-1b4f-4939-ad43-780c031bf070 00:17:32.288 06:43:36 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:32.289 06:43:36 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:32.547 [2024-04-17 06:43:36.903377] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 
51200, new block count 102400 00:17:32.547 [2024-04-17 06:43:36.903478] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:32.547 true 00:17:32.547 06:43:36 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 077dec1f-d969-4e38-9195-2996825e0c5f 00:17:32.547 06:43:36 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:32.804 06:43:37 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:32.804 06:43:37 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:33.124 06:43:37 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 65d7cd94-1b4f-4939-ad43-780c031bf070 00:17:33.124 06:43:37 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:33.398 06:43:37 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:33.656 06:43:38 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4174502 00:17:33.656 06:43:38 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:33.656 06:43:38 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:33.656 06:43:38 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4174502 /var/tmp/bdevperf.sock 00:17:33.656 06:43:38 -- common/autotest_common.sh@817 -- # '[' -z 4174502 ']' 00:17:33.656 06:43:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:33.656 06:43:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:33.656 06:43:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:33.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:33.656 06:43:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:33.656 06:43:38 -- common/autotest_common.sh@10 -- # set +x 00:17:33.656 [2024-04-17 06:43:38.186235] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
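Both the clean and dirty passes drive I/O the same way: the lvol is exposed through NVMe-oF subsystem nqn.2016-06.io.spdk:cnode0, and a separate bdevperf process with its own RPC socket attaches to it as a TCP initiator and runs a 10-second 4K randwrite workload at queue depth 128. A rough sketch of that harness, with the NQN, address and bdevperf options taken from this trace, would be:

  rpc=scripts/rpc.py
  # target side (nvmf_tgt already runs a TCP transport created with: nvmf_create_transport -t tcp -o -u 8192)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 $lvol             # $lvol: uuid of the lvol bdev
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: bdevperf started with -z so it waits for RPC before running
  build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  # the per-second Nvme0n1 IOPS lines and the final Latency(us) summary in this log come from that run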
00:17:33.656 [2024-04-17 06:43:38.186307] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4174502 ] 00:17:33.656 EAL: No free 2048 kB hugepages reported on node 1 00:17:33.656 [2024-04-17 06:43:38.248805] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.915 [2024-04-17 06:43:38.338108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.915 06:43:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:33.915 06:43:38 -- common/autotest_common.sh@850 -- # return 0 00:17:33.915 06:43:38 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:34.480 Nvme0n1 00:17:34.480 06:43:38 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:34.480 [ 00:17:34.480 { 00:17:34.480 "name": "Nvme0n1", 00:17:34.480 "aliases": [ 00:17:34.480 "65d7cd94-1b4f-4939-ad43-780c031bf070" 00:17:34.480 ], 00:17:34.480 "product_name": "NVMe disk", 00:17:34.480 "block_size": 4096, 00:17:34.480 "num_blocks": 38912, 00:17:34.480 "uuid": "65d7cd94-1b4f-4939-ad43-780c031bf070", 00:17:34.480 "assigned_rate_limits": { 00:17:34.480 "rw_ios_per_sec": 0, 00:17:34.480 "rw_mbytes_per_sec": 0, 00:17:34.480 "r_mbytes_per_sec": 0, 00:17:34.480 "w_mbytes_per_sec": 0 00:17:34.480 }, 00:17:34.480 "claimed": false, 00:17:34.480 "zoned": false, 00:17:34.480 "supported_io_types": { 00:17:34.480 "read": true, 00:17:34.480 "write": true, 00:17:34.480 "unmap": true, 00:17:34.480 "write_zeroes": true, 00:17:34.480 "flush": true, 00:17:34.480 "reset": true, 00:17:34.481 "compare": true, 00:17:34.481 "compare_and_write": true, 00:17:34.481 "abort": true, 00:17:34.481 "nvme_admin": true, 00:17:34.481 "nvme_io": true 00:17:34.481 }, 00:17:34.481 "memory_domains": [ 00:17:34.481 { 00:17:34.481 "dma_device_id": "system", 00:17:34.481 "dma_device_type": 1 00:17:34.481 } 00:17:34.481 ], 00:17:34.481 "driver_specific": { 00:17:34.481 "nvme": [ 00:17:34.481 { 00:17:34.481 "trid": { 00:17:34.481 "trtype": "TCP", 00:17:34.481 "adrfam": "IPv4", 00:17:34.481 "traddr": "10.0.0.2", 00:17:34.481 "trsvcid": "4420", 00:17:34.481 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:34.481 }, 00:17:34.481 "ctrlr_data": { 00:17:34.481 "cntlid": 1, 00:17:34.481 "vendor_id": "0x8086", 00:17:34.481 "model_number": "SPDK bdev Controller", 00:17:34.481 "serial_number": "SPDK0", 00:17:34.481 "firmware_revision": "24.05", 00:17:34.481 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:34.481 "oacs": { 00:17:34.481 "security": 0, 00:17:34.481 "format": 0, 00:17:34.481 "firmware": 0, 00:17:34.481 "ns_manage": 0 00:17:34.481 }, 00:17:34.481 "multi_ctrlr": true, 00:17:34.481 "ana_reporting": false 00:17:34.481 }, 00:17:34.481 "vs": { 00:17:34.481 "nvme_version": "1.3" 00:17:34.481 }, 00:17:34.481 "ns_data": { 00:17:34.481 "id": 1, 00:17:34.481 "can_share": true 00:17:34.481 } 00:17:34.481 } 00:17:34.481 ], 00:17:34.481 "mp_policy": "active_passive" 00:17:34.481 } 00:17:34.481 } 00:17:34.481 ] 00:17:34.481 06:43:39 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4174560 00:17:34.481 06:43:39 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:34.481 06:43:39 -- target/nvmf_lvs_grow.sh@55 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:34.739 Running I/O for 10 seconds... 00:17:35.673 Latency(us) 00:17:35.673 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.673 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:35.673 Nvme0n1 : 1.00 14000.00 54.69 0.00 0.00 0.00 0.00 0.00 00:17:35.673 =================================================================================================================== 00:17:35.673 Total : 14000.00 54.69 0.00 0.00 0.00 0.00 0.00 00:17:35.673 00:17:36.614 06:43:41 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 077dec1f-d969-4e38-9195-2996825e0c5f 00:17:36.614 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:36.614 Nvme0n1 : 2.00 14253.50 55.68 0.00 0.00 0.00 0.00 0.00 00:17:36.614 =================================================================================================================== 00:17:36.614 Total : 14253.50 55.68 0.00 0.00 0.00 0.00 0.00 00:17:36.614 00:17:36.872 true 00:17:36.872 06:43:41 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 077dec1f-d969-4e38-9195-2996825e0c5f 00:17:36.872 06:43:41 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:37.130 06:43:41 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:37.130 06:43:41 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:37.130 06:43:41 -- target/nvmf_lvs_grow.sh@65 -- # wait 4174560 00:17:37.697 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:37.697 Nvme0n1 : 3.00 14332.00 55.98 0.00 0.00 0.00 0.00 0.00 00:17:37.697 =================================================================================================================== 00:17:37.697 Total : 14332.00 55.98 0.00 0.00 0.00 0.00 0.00 00:17:37.697 00:17:38.632 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:38.632 Nvme0n1 : 4.00 14435.75 56.39 0.00 0.00 0.00 0.00 0.00 00:17:38.632 =================================================================================================================== 00:17:38.632 Total : 14435.75 56.39 0.00 0.00 0.00 0.00 0.00 00:17:38.632 00:17:40.005 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:40.005 Nvme0n1 : 5.00 14511.40 56.69 0.00 0.00 0.00 0.00 0.00 00:17:40.005 =================================================================================================================== 00:17:40.005 Total : 14511.40 56.69 0.00 0.00 0.00 0.00 0.00 00:17:40.005 00:17:40.938 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:40.938 Nvme0n1 : 6.00 14556.50 56.86 0.00 0.00 0.00 0.00 0.00 00:17:40.938 =================================================================================================================== 00:17:40.938 Total : 14556.50 56.86 0.00 0.00 0.00 0.00 0.00 00:17:40.938 00:17:41.871 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:41.871 Nvme0n1 : 7.00 14639.57 57.19 0.00 0.00 0.00 0.00 0.00 00:17:41.871 =================================================================================================================== 00:17:41.871 Total : 14639.57 57.19 0.00 0.00 0.00 0.00 0.00 00:17:41.871 00:17:42.804 Job: Nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:17:42.804 Nvme0n1 : 8.00 14657.50 57.26 0.00 0.00 0.00 0.00 0.00 00:17:42.804 =================================================================================================================== 00:17:42.804 Total : 14657.50 57.26 0.00 0.00 0.00 0.00 0.00 00:17:42.804 00:17:43.739 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:43.739 Nvme0n1 : 9.00 14669.22 57.30 0.00 0.00 0.00 0.00 0.00 00:17:43.739 =================================================================================================================== 00:17:43.739 Total : 14669.22 57.30 0.00 0.00 0.00 0.00 0.00 00:17:43.739 00:17:44.680 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:44.680 Nvme0n1 : 10.00 14719.00 57.50 0.00 0.00 0.00 0.00 0.00 00:17:44.680 =================================================================================================================== 00:17:44.680 Total : 14719.00 57.50 0.00 0.00 0.00 0.00 0.00 00:17:44.680 00:17:44.680 00:17:44.680 Latency(us) 00:17:44.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.680 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:44.680 Nvme0n1 : 10.01 14722.45 57.51 0.00 0.00 8689.14 2233.08 16699.54 00:17:44.680 =================================================================================================================== 00:17:44.680 Total : 14722.45 57.51 0.00 0.00 8689.14 2233.08 16699.54 00:17:44.680 0 00:17:44.680 06:43:49 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4174502 00:17:44.681 06:43:49 -- common/autotest_common.sh@936 -- # '[' -z 4174502 ']' 00:17:44.681 06:43:49 -- common/autotest_common.sh@940 -- # kill -0 4174502 00:17:44.681 06:43:49 -- common/autotest_common.sh@941 -- # uname 00:17:44.681 06:43:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:44.681 06:43:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4174502 00:17:44.681 06:43:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:44.681 06:43:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:44.681 06:43:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4174502' 00:17:44.681 killing process with pid 4174502 00:17:44.681 06:43:49 -- common/autotest_common.sh@955 -- # kill 4174502 00:17:44.681 Received shutdown signal, test time was about 10.000000 seconds 00:17:44.681 00:17:44.681 Latency(us) 00:17:44.681 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.681 =================================================================================================================== 00:17:44.681 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:44.681 06:43:49 -- common/autotest_common.sh@960 -- # wait 4174502 00:17:44.939 06:43:49 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:45.197 06:43:49 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 077dec1f-d969-4e38-9195-2996825e0c5f 00:17:45.197 06:43:49 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:45.455 06:43:50 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:45.455 06:43:50 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:17:45.455 06:43:50 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 4171929 00:17:45.455 
06:43:50 -- target/nvmf_lvs_grow.sh@74 -- # wait 4171929 00:17:45.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 4171929 Killed "${NVMF_APP[@]}" "$@" 00:17:45.455 06:43:50 -- target/nvmf_lvs_grow.sh@74 -- # true 00:17:45.455 06:43:50 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:17:45.455 06:43:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:45.455 06:43:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:45.455 06:43:50 -- common/autotest_common.sh@10 -- # set +x 00:17:45.714 06:43:50 -- nvmf/common.sh@470 -- # nvmfpid=4175880 00:17:45.714 06:43:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:45.714 06:43:50 -- nvmf/common.sh@471 -- # waitforlisten 4175880 00:17:45.714 06:43:50 -- common/autotest_common.sh@817 -- # '[' -z 4175880 ']' 00:17:45.714 06:43:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.714 06:43:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:45.714 06:43:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.714 06:43:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:45.714 06:43:50 -- common/autotest_common.sh@10 -- # set +x 00:17:45.714 [2024-04-17 06:43:50.111616] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:17:45.714 [2024-04-17 06:43:50.111726] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.714 EAL: No free 2048 kB hugepages reported on node 1 00:17:45.714 [2024-04-17 06:43:50.183957] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.714 [2024-04-17 06:43:50.271433] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:45.714 [2024-04-17 06:43:50.271518] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:45.714 [2024-04-17 06:43:50.271556] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:45.714 [2024-04-17 06:43:50.271569] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:45.714 [2024-04-17 06:43:50.271578] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
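The dirty pass differs from the clean one at the end: instead of cleaning up, the nvmf_tgt that owns the lvstore is killed with SIGKILL while the lvstore is dirty (the 'Killed' line above), a fresh nvmf_tgt is started in the same namespace, and re-creating the aio bdev below triggers blobstore recovery (the 'Performing recovery on blobstore' and 'Recover: blob' notices). The test then waits for the lvol to reappear and checks that the grown cluster counts survived the crash. With the UUIDs from this run, the recovery-side checks amount to roughly:

  rpc=scripts/rpc.py
  lvs=077dec1f-d969-4e38-9195-2996825e0c5f        # lvstore uuid from this run
  lvol=65d7cd94-1b4f-4939-ad43-780c031bf070       # lvol uuid from this run
  $rpc bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096   # blobstore recovery happens here
  $rpc bdev_wait_for_examine
  $rpc bdev_get_bdevs -b $lvol -t 2000                           # wait until the lvol is back
  $rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].free_clusters'         # expected: 61
  $rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'   # expected: 99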
00:17:45.714 [2024-04-17 06:43:50.271605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.972 06:43:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:45.972 06:43:50 -- common/autotest_common.sh@850 -- # return 0 00:17:45.972 06:43:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:45.972 06:43:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:45.972 06:43:50 -- common/autotest_common.sh@10 -- # set +x 00:17:45.972 06:43:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:45.972 06:43:50 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:46.230 [2024-04-17 06:43:50.680679] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:46.230 [2024-04-17 06:43:50.680852] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:46.230 [2024-04-17 06:43:50.680902] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:46.230 06:43:50 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:17:46.230 06:43:50 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 65d7cd94-1b4f-4939-ad43-780c031bf070 00:17:46.230 06:43:50 -- common/autotest_common.sh@885 -- # local bdev_name=65d7cd94-1b4f-4939-ad43-780c031bf070 00:17:46.230 06:43:50 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:46.230 06:43:50 -- common/autotest_common.sh@887 -- # local i 00:17:46.230 06:43:50 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:46.230 06:43:50 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:46.230 06:43:50 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:46.488 06:43:50 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 65d7cd94-1b4f-4939-ad43-780c031bf070 -t 2000 00:17:46.747 [ 00:17:46.747 { 00:17:46.747 "name": "65d7cd94-1b4f-4939-ad43-780c031bf070", 00:17:46.747 "aliases": [ 00:17:46.747 "lvs/lvol" 00:17:46.747 ], 00:17:46.747 "product_name": "Logical Volume", 00:17:46.747 "block_size": 4096, 00:17:46.747 "num_blocks": 38912, 00:17:46.747 "uuid": "65d7cd94-1b4f-4939-ad43-780c031bf070", 00:17:46.747 "assigned_rate_limits": { 00:17:46.747 "rw_ios_per_sec": 0, 00:17:46.747 "rw_mbytes_per_sec": 0, 00:17:46.747 "r_mbytes_per_sec": 0, 00:17:46.747 "w_mbytes_per_sec": 0 00:17:46.747 }, 00:17:46.747 "claimed": false, 00:17:46.747 "zoned": false, 00:17:46.747 "supported_io_types": { 00:17:46.747 "read": true, 00:17:46.747 "write": true, 00:17:46.747 "unmap": true, 00:17:46.747 "write_zeroes": true, 00:17:46.747 "flush": false, 00:17:46.747 "reset": true, 00:17:46.747 "compare": false, 00:17:46.747 "compare_and_write": false, 00:17:46.747 "abort": false, 00:17:46.747 "nvme_admin": false, 00:17:46.747 "nvme_io": false 00:17:46.747 }, 00:17:46.747 "driver_specific": { 00:17:46.747 "lvol": { 00:17:46.747 "lvol_store_uuid": "077dec1f-d969-4e38-9195-2996825e0c5f", 00:17:46.747 "base_bdev": "aio_bdev", 00:17:46.747 "thin_provision": false, 00:17:46.747 "snapshot": false, 00:17:46.747 "clone": false, 00:17:46.747 "esnap_clone": false 00:17:46.747 } 00:17:46.747 } 00:17:46.747 } 00:17:46.747 ] 00:17:46.747 06:43:51 -- common/autotest_common.sh@893 -- # return 0 00:17:46.747 06:43:51 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 077dec1f-d969-4e38-9195-2996825e0c5f 00:17:46.747 06:43:51 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:17:47.005 06:43:51 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:17:47.005 06:43:51 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 077dec1f-d969-4e38-9195-2996825e0c5f 00:17:47.005 06:43:51 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:17:47.263 06:43:51 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:17:47.263 06:43:51 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:47.551 [2024-04-17 06:43:51.961728] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:47.551 06:43:51 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 077dec1f-d969-4e38-9195-2996825e0c5f 00:17:47.551 06:43:51 -- common/autotest_common.sh@638 -- # local es=0 00:17:47.551 06:43:51 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 077dec1f-d969-4e38-9195-2996825e0c5f 00:17:47.551 06:43:51 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:47.551 06:43:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:47.551 06:43:51 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:47.551 06:43:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:47.551 06:43:51 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:47.551 06:43:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:47.551 06:43:51 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:47.551 06:43:51 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:47.551 06:43:51 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 077dec1f-d969-4e38-9195-2996825e0c5f 00:17:47.809 request: 00:17:47.809 { 00:17:47.809 "uuid": "077dec1f-d969-4e38-9195-2996825e0c5f", 00:17:47.809 "method": "bdev_lvol_get_lvstores", 00:17:47.809 "req_id": 1 00:17:47.809 } 00:17:47.809 Got JSON-RPC error response 00:17:47.809 response: 00:17:47.809 { 00:17:47.809 "code": -19, 00:17:47.809 "message": "No such device" 00:17:47.809 } 00:17:47.809 06:43:52 -- common/autotest_common.sh@641 -- # es=1 00:17:47.809 06:43:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:47.809 06:43:52 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:47.809 06:43:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:47.809 06:43:52 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:48.067 aio_bdev 00:17:48.067 06:43:52 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 65d7cd94-1b4f-4939-ad43-780c031bf070 00:17:48.067 06:43:52 -- 
common/autotest_common.sh@885 -- # local bdev_name=65d7cd94-1b4f-4939-ad43-780c031bf070 00:17:48.067 06:43:52 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:48.067 06:43:52 -- common/autotest_common.sh@887 -- # local i 00:17:48.067 06:43:52 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:48.067 06:43:52 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:48.067 06:43:52 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:48.325 06:43:52 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 65d7cd94-1b4f-4939-ad43-780c031bf070 -t 2000 00:17:48.583 [ 00:17:48.583 { 00:17:48.583 "name": "65d7cd94-1b4f-4939-ad43-780c031bf070", 00:17:48.583 "aliases": [ 00:17:48.583 "lvs/lvol" 00:17:48.583 ], 00:17:48.583 "product_name": "Logical Volume", 00:17:48.583 "block_size": 4096, 00:17:48.583 "num_blocks": 38912, 00:17:48.583 "uuid": "65d7cd94-1b4f-4939-ad43-780c031bf070", 00:17:48.583 "assigned_rate_limits": { 00:17:48.583 "rw_ios_per_sec": 0, 00:17:48.583 "rw_mbytes_per_sec": 0, 00:17:48.583 "r_mbytes_per_sec": 0, 00:17:48.583 "w_mbytes_per_sec": 0 00:17:48.583 }, 00:17:48.583 "claimed": false, 00:17:48.583 "zoned": false, 00:17:48.583 "supported_io_types": { 00:17:48.583 "read": true, 00:17:48.583 "write": true, 00:17:48.583 "unmap": true, 00:17:48.583 "write_zeroes": true, 00:17:48.583 "flush": false, 00:17:48.583 "reset": true, 00:17:48.583 "compare": false, 00:17:48.583 "compare_and_write": false, 00:17:48.583 "abort": false, 00:17:48.583 "nvme_admin": false, 00:17:48.583 "nvme_io": false 00:17:48.583 }, 00:17:48.583 "driver_specific": { 00:17:48.583 "lvol": { 00:17:48.583 "lvol_store_uuid": "077dec1f-d969-4e38-9195-2996825e0c5f", 00:17:48.583 "base_bdev": "aio_bdev", 00:17:48.583 "thin_provision": false, 00:17:48.583 "snapshot": false, 00:17:48.583 "clone": false, 00:17:48.583 "esnap_clone": false 00:17:48.583 } 00:17:48.583 } 00:17:48.583 } 00:17:48.583 ] 00:17:48.583 06:43:52 -- common/autotest_common.sh@893 -- # return 0 00:17:48.583 06:43:52 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 077dec1f-d969-4e38-9195-2996825e0c5f 00:17:48.583 06:43:52 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:48.841 06:43:53 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:48.841 06:43:53 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 077dec1f-d969-4e38-9195-2996825e0c5f 00:17:48.841 06:43:53 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:49.099 06:43:53 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:49.099 06:43:53 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 65d7cd94-1b4f-4939-ad43-780c031bf070 00:17:49.358 06:43:53 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 077dec1f-d969-4e38-9195-2996825e0c5f 00:17:49.616 06:43:53 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:49.616 06:43:54 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:49.874 00:17:49.874 real 0m18.577s 00:17:49.874 user 
0m47.220s 00:17:49.874 sys 0m4.664s 00:17:49.874 06:43:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:49.874 06:43:54 -- common/autotest_common.sh@10 -- # set +x 00:17:49.874 ************************************ 00:17:49.874 END TEST lvs_grow_dirty 00:17:49.874 ************************************ 00:17:49.874 06:43:54 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:49.874 06:43:54 -- common/autotest_common.sh@794 -- # type=--id 00:17:49.874 06:43:54 -- common/autotest_common.sh@795 -- # id=0 00:17:49.874 06:43:54 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:17:49.874 06:43:54 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:49.874 06:43:54 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:17:49.874 06:43:54 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:17:49.874 06:43:54 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:17:49.874 06:43:54 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:49.874 nvmf_trace.0 00:17:49.874 06:43:54 -- common/autotest_common.sh@809 -- # return 0 00:17:49.874 06:43:54 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:49.874 06:43:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:49.874 06:43:54 -- nvmf/common.sh@117 -- # sync 00:17:49.874 06:43:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:49.874 06:43:54 -- nvmf/common.sh@120 -- # set +e 00:17:49.874 06:43:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:49.874 06:43:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:49.874 rmmod nvme_tcp 00:17:49.874 rmmod nvme_fabrics 00:17:49.874 rmmod nvme_keyring 00:17:49.874 06:43:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:49.874 06:43:54 -- nvmf/common.sh@124 -- # set -e 00:17:49.874 06:43:54 -- nvmf/common.sh@125 -- # return 0 00:17:49.874 06:43:54 -- nvmf/common.sh@478 -- # '[' -n 4175880 ']' 00:17:49.874 06:43:54 -- nvmf/common.sh@479 -- # killprocess 4175880 00:17:49.874 06:43:54 -- common/autotest_common.sh@936 -- # '[' -z 4175880 ']' 00:17:49.874 06:43:54 -- common/autotest_common.sh@940 -- # kill -0 4175880 00:17:49.874 06:43:54 -- common/autotest_common.sh@941 -- # uname 00:17:49.874 06:43:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:49.874 06:43:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4175880 00:17:49.874 06:43:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:49.874 06:43:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:49.874 06:43:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4175880' 00:17:49.874 killing process with pid 4175880 00:17:49.874 06:43:54 -- common/autotest_common.sh@955 -- # kill 4175880 00:17:49.874 06:43:54 -- common/autotest_common.sh@960 -- # wait 4175880 00:17:50.133 06:43:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:50.133 06:43:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:50.133 06:43:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:50.133 06:43:54 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:50.133 06:43:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:50.133 06:43:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.133 06:43:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.133 06:43:54 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:17:52.669 06:43:56 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:52.669 00:17:52.669 real 0m41.164s 00:17:52.669 user 1m9.444s 00:17:52.669 sys 0m8.524s 00:17:52.669 06:43:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:52.669 06:43:56 -- common/autotest_common.sh@10 -- # set +x 00:17:52.669 ************************************ 00:17:52.669 END TEST nvmf_lvs_grow 00:17:52.669 ************************************ 00:17:52.669 06:43:56 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:52.669 06:43:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:52.669 06:43:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:52.669 06:43:56 -- common/autotest_common.sh@10 -- # set +x 00:17:52.669 ************************************ 00:17:52.669 START TEST nvmf_bdev_io_wait 00:17:52.669 ************************************ 00:17:52.669 06:43:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:52.669 * Looking for test storage... 00:17:52.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:52.669 06:43:56 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:52.669 06:43:56 -- nvmf/common.sh@7 -- # uname -s 00:17:52.669 06:43:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.669 06:43:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.669 06:43:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.669 06:43:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.669 06:43:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.669 06:43:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.669 06:43:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.669 06:43:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.669 06:43:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.669 06:43:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.669 06:43:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:52.669 06:43:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:52.669 06:43:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.669 06:43:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.669 06:43:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:52.669 06:43:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.669 06:43:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:52.669 06:43:56 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.669 06:43:56 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.669 06:43:56 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.669 06:43:56 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.669 06:43:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.669 06:43:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.669 06:43:56 -- paths/export.sh@5 -- # export PATH 00:17:52.669 06:43:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.669 06:43:56 -- nvmf/common.sh@47 -- # : 0 00:17:52.669 06:43:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:52.669 06:43:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:52.669 06:43:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.669 06:43:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.669 06:43:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.669 06:43:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:52.669 06:43:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:52.669 06:43:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:52.669 06:43:56 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:52.669 06:43:56 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:52.669 06:43:56 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:52.669 06:43:56 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:52.669 06:43:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:52.669 06:43:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:52.669 06:43:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:52.669 06:43:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:52.669 06:43:56 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.669 06:43:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:52.669 06:43:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.669 06:43:56 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:52.669 06:43:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:52.669 06:43:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:52.669 06:43:56 -- common/autotest_common.sh@10 -- # set +x 00:17:54.575 06:43:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:54.575 06:43:58 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:54.575 06:43:58 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:54.575 06:43:58 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:54.575 06:43:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:54.575 06:43:58 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:54.575 06:43:58 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:54.575 06:43:58 -- nvmf/common.sh@295 -- # net_devs=() 00:17:54.575 06:43:58 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:54.575 06:43:58 -- nvmf/common.sh@296 -- # e810=() 00:17:54.575 06:43:58 -- nvmf/common.sh@296 -- # local -ga e810 00:17:54.575 06:43:58 -- nvmf/common.sh@297 -- # x722=() 00:17:54.575 06:43:58 -- nvmf/common.sh@297 -- # local -ga x722 00:17:54.575 06:43:58 -- nvmf/common.sh@298 -- # mlx=() 00:17:54.575 06:43:58 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:54.575 06:43:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:54.575 06:43:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:54.575 06:43:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:54.575 06:43:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:54.575 06:43:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:54.575 06:43:58 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:54.575 06:43:58 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:54.575 06:43:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:54.575 06:43:58 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:54.575 06:43:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:54.575 06:43:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:54.575 06:43:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:54.575 06:43:58 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:54.575 06:43:58 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:54.575 06:43:58 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:54.575 06:43:58 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:54.575 06:43:58 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:54.576 06:43:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:54.576 06:43:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:54.576 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:54.576 06:43:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:54.576 06:43:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:54.576 06:43:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:54.576 06:43:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:54.576 06:43:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:54.576 06:43:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:17:54.576 06:43:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:54.576 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:54.576 06:43:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:54.576 06:43:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:54.576 06:43:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:54.576 06:43:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:54.576 06:43:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:54.576 06:43:58 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:54.576 06:43:58 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:54.576 06:43:58 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:54.576 06:43:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:54.576 06:43:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:54.576 06:43:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:54.576 06:43:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:54.576 06:43:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:54.576 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:54.576 06:43:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:54.576 06:43:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:54.576 06:43:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:54.576 06:43:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:54.576 06:43:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:54.576 06:43:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:54.576 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:54.576 06:43:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:54.576 06:43:58 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:54.576 06:43:58 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:54.576 06:43:58 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:54.576 06:43:58 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:54.576 06:43:58 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:54.576 06:43:58 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:54.576 06:43:58 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:54.576 06:43:58 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:54.576 06:43:58 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:54.576 06:43:58 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:54.576 06:43:58 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:54.576 06:43:58 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:54.576 06:43:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:54.576 06:43:58 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:54.576 06:43:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:54.576 06:43:58 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:54.576 06:43:58 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:54.576 06:43:58 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:54.576 06:43:58 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:54.576 06:43:58 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:54.576 06:43:58 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:54.576 06:43:58 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:54.576 06:43:58 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:54.576 06:43:58 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:54.576 06:43:58 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:54.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:54.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:17:54.576 00:17:54.576 --- 10.0.0.2 ping statistics --- 00:17:54.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.576 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:17:54.576 06:43:58 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:54.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:54.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:17:54.576 00:17:54.576 --- 10.0.0.1 ping statistics --- 00:17:54.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.576 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:17:54.576 06:43:58 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:54.576 06:43:58 -- nvmf/common.sh@411 -- # return 0 00:17:54.576 06:43:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:54.576 06:43:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:54.576 06:43:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:54.576 06:43:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:54.576 06:43:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:54.576 06:43:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:54.576 06:43:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:54.576 06:43:58 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:54.576 06:43:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:54.576 06:43:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:54.576 06:43:58 -- common/autotest_common.sh@10 -- # set +x 00:17:54.576 06:43:58 -- nvmf/common.sh@470 -- # nvmfpid=4178410 00:17:54.576 06:43:58 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:54.576 06:43:58 -- nvmf/common.sh@471 -- # waitforlisten 4178410 00:17:54.576 06:43:58 -- common/autotest_common.sh@817 -- # '[' -z 4178410 ']' 00:17:54.576 06:43:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.576 06:43:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:54.576 06:43:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.576 06:43:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:54.576 06:43:58 -- common/autotest_common.sh@10 -- # set +x 00:17:54.576 [2024-04-17 06:43:59.018370] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:17:54.576 [2024-04-17 06:43:59.018448] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.576 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.576 [2024-04-17 06:43:59.084895] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:54.576 [2024-04-17 06:43:59.174933] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:54.576 [2024-04-17 06:43:59.174987] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:54.576 [2024-04-17 06:43:59.175001] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:54.576 [2024-04-17 06:43:59.175013] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:54.576 [2024-04-17 06:43:59.175025] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:54.576 [2024-04-17 06:43:59.175081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.576 [2024-04-17 06:43:59.175140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:54.576 [2024-04-17 06:43:59.175170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:54.576 [2024-04-17 06:43:59.175171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.835 06:43:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:54.835 06:43:59 -- common/autotest_common.sh@850 -- # return 0 00:17:54.835 06:43:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:54.835 06:43:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:54.835 06:43:59 -- common/autotest_common.sh@10 -- # set +x 00:17:54.835 06:43:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:54.835 06:43:59 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:54.835 06:43:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:54.835 06:43:59 -- common/autotest_common.sh@10 -- # set +x 00:17:54.835 06:43:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:54.835 06:43:59 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:54.835 06:43:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:54.835 06:43:59 -- common/autotest_common.sh@10 -- # set +x 00:17:54.835 06:43:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:54.835 06:43:59 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:54.835 06:43:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:54.835 06:43:59 -- common/autotest_common.sh@10 -- # set +x 00:17:54.835 [2024-04-17 06:43:59.350777] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:54.835 06:43:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:54.835 06:43:59 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:54.835 06:43:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:54.835 06:43:59 -- common/autotest_common.sh@10 -- # set +x 00:17:54.835 Malloc0 00:17:54.835 06:43:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:54.835 06:43:59 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:54.835 06:43:59 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:17:54.835 06:43:59 -- common/autotest_common.sh@10 -- # set +x 00:17:54.835 06:43:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:54.835 06:43:59 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:54.835 06:43:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:54.835 06:43:59 -- common/autotest_common.sh@10 -- # set +x 00:17:54.835 06:43:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:54.835 06:43:59 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:54.835 06:43:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:54.835 06:43:59 -- common/autotest_common.sh@10 -- # set +x 00:17:54.835 [2024-04-17 06:43:59.411694] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:54.835 06:43:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:54.835 06:43:59 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4178437 00:17:54.835 06:43:59 -- target/bdev_io_wait.sh@30 -- # READ_PID=4178439 00:17:54.836 06:43:59 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:54.836 06:43:59 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:54.836 06:43:59 -- nvmf/common.sh@521 -- # config=() 00:17:54.836 06:43:59 -- nvmf/common.sh@521 -- # local subsystem config 00:17:54.836 06:43:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:54.836 06:43:59 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4178441 00:17:54.836 06:43:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:54.836 { 00:17:54.836 "params": { 00:17:54.836 "name": "Nvme$subsystem", 00:17:54.836 "trtype": "$TEST_TRANSPORT", 00:17:54.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:54.836 "adrfam": "ipv4", 00:17:54.836 "trsvcid": "$NVMF_PORT", 00:17:54.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:54.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:54.836 "hdgst": ${hdgst:-false}, 00:17:54.836 "ddgst": ${ddgst:-false} 00:17:54.836 }, 00:17:54.836 "method": "bdev_nvme_attach_controller" 00:17:54.836 } 00:17:54.836 EOF 00:17:54.836 )") 00:17:54.836 06:43:59 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:54.836 06:43:59 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:54.836 06:43:59 -- nvmf/common.sh@521 -- # config=() 00:17:54.836 06:43:59 -- nvmf/common.sh@521 -- # local subsystem config 00:17:54.836 06:43:59 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4178443 00:17:54.836 06:43:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:54.836 06:43:59 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:54.836 06:43:59 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:54.836 06:43:59 -- target/bdev_io_wait.sh@35 -- # sync 00:17:54.836 06:43:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:54.836 { 00:17:54.836 "params": { 00:17:54.836 "name": "Nvme$subsystem", 00:17:54.836 "trtype": "$TEST_TRANSPORT", 00:17:54.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:54.836 "adrfam": "ipv4", 00:17:54.836 "trsvcid": "$NVMF_PORT", 
00:17:54.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:54.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:54.836 "hdgst": ${hdgst:-false}, 00:17:54.836 "ddgst": ${ddgst:-false} 00:17:54.836 }, 00:17:54.836 "method": "bdev_nvme_attach_controller" 00:17:54.836 } 00:17:54.836 EOF 00:17:54.836 )") 00:17:54.836 06:43:59 -- nvmf/common.sh@521 -- # config=() 00:17:54.836 06:43:59 -- nvmf/common.sh@521 -- # local subsystem config 00:17:54.836 06:43:59 -- nvmf/common.sh@543 -- # cat 00:17:54.836 06:43:59 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:54.836 06:43:59 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:54.836 06:43:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:54.836 06:43:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:54.836 { 00:17:54.836 "params": { 00:17:54.836 "name": "Nvme$subsystem", 00:17:54.836 "trtype": "$TEST_TRANSPORT", 00:17:54.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:54.836 "adrfam": "ipv4", 00:17:54.836 "trsvcid": "$NVMF_PORT", 00:17:54.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:54.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:54.836 "hdgst": ${hdgst:-false}, 00:17:54.836 "ddgst": ${ddgst:-false} 00:17:54.836 }, 00:17:54.836 "method": "bdev_nvme_attach_controller" 00:17:54.836 } 00:17:54.836 EOF 00:17:54.836 )") 00:17:54.836 06:43:59 -- nvmf/common.sh@521 -- # config=() 00:17:54.836 06:43:59 -- nvmf/common.sh@521 -- # local subsystem config 00:17:54.836 06:43:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:54.836 06:43:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:54.836 { 00:17:54.836 "params": { 00:17:54.836 "name": "Nvme$subsystem", 00:17:54.836 "trtype": "$TEST_TRANSPORT", 00:17:54.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:54.836 "adrfam": "ipv4", 00:17:54.836 "trsvcid": "$NVMF_PORT", 00:17:54.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:54.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:54.836 "hdgst": ${hdgst:-false}, 00:17:54.836 "ddgst": ${ddgst:-false} 00:17:54.836 }, 00:17:54.836 "method": "bdev_nvme_attach_controller" 00:17:54.836 } 00:17:54.836 EOF 00:17:54.836 )") 00:17:54.836 06:43:59 -- nvmf/common.sh@543 -- # cat 00:17:54.836 06:43:59 -- nvmf/common.sh@543 -- # cat 00:17:54.836 06:43:59 -- target/bdev_io_wait.sh@37 -- # wait 4178437 00:17:54.836 06:43:59 -- nvmf/common.sh@543 -- # cat 00:17:54.836 06:43:59 -- nvmf/common.sh@545 -- # jq . 00:17:54.836 06:43:59 -- nvmf/common.sh@545 -- # jq . 00:17:54.836 06:43:59 -- nvmf/common.sh@545 -- # jq . 00:17:54.836 06:43:59 -- nvmf/common.sh@546 -- # IFS=, 00:17:54.836 06:43:59 -- nvmf/common.sh@545 -- # jq . 
00:17:54.836 06:43:59 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:54.836 "params": { 00:17:54.836 "name": "Nvme1", 00:17:54.836 "trtype": "tcp", 00:17:54.836 "traddr": "10.0.0.2", 00:17:54.836 "adrfam": "ipv4", 00:17:54.836 "trsvcid": "4420", 00:17:54.836 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:54.836 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:54.836 "hdgst": false, 00:17:54.836 "ddgst": false 00:17:54.836 }, 00:17:54.836 "method": "bdev_nvme_attach_controller" 00:17:54.836 }' 00:17:54.836 06:43:59 -- nvmf/common.sh@546 -- # IFS=, 00:17:54.836 06:43:59 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:54.836 "params": { 00:17:54.836 "name": "Nvme1", 00:17:54.836 "trtype": "tcp", 00:17:54.836 "traddr": "10.0.0.2", 00:17:54.836 "adrfam": "ipv4", 00:17:54.836 "trsvcid": "4420", 00:17:54.836 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:54.836 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:54.836 "hdgst": false, 00:17:54.836 "ddgst": false 00:17:54.836 }, 00:17:54.836 "method": "bdev_nvme_attach_controller" 00:17:54.836 }' 00:17:54.836 06:43:59 -- nvmf/common.sh@546 -- # IFS=, 00:17:54.836 06:43:59 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:54.836 "params": { 00:17:54.836 "name": "Nvme1", 00:17:54.836 "trtype": "tcp", 00:17:54.836 "traddr": "10.0.0.2", 00:17:54.836 "adrfam": "ipv4", 00:17:54.836 "trsvcid": "4420", 00:17:54.836 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:54.836 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:54.836 "hdgst": false, 00:17:54.836 "ddgst": false 00:17:54.836 }, 00:17:54.836 "method": "bdev_nvme_attach_controller" 00:17:54.836 }' 00:17:54.836 06:43:59 -- nvmf/common.sh@546 -- # IFS=, 00:17:54.836 06:43:59 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:54.836 "params": { 00:17:54.836 "name": "Nvme1", 00:17:54.836 "trtype": "tcp", 00:17:54.836 "traddr": "10.0.0.2", 00:17:54.836 "adrfam": "ipv4", 00:17:54.836 "trsvcid": "4420", 00:17:54.836 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:54.836 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:54.836 "hdgst": false, 00:17:54.836 "ddgst": false 00:17:54.836 }, 00:17:54.836 "method": "bdev_nvme_attach_controller" 00:17:54.836 }' 00:17:55.095 [2024-04-17 06:43:59.458858] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:17:55.095 [2024-04-17 06:43:59.458858] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:17:55.095 [2024-04-17 06:43:59.458858] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:17:55.095 [2024-04-17 06:43:59.458861] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:17:55.095 [2024-04-17 06:43:59.458940] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-04-17 06:43:59.458940] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-04-17 06:43:59.458940] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:55.095 [2024-04-17 06:43:59.458945] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:55.095 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:55.095 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:55.095 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.095 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.095 [2024-04-17 06:43:59.631542] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.353 [2024-04-17 06:43:59.708127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:55.353 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.353 [2024-04-17 06:43:59.716894] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:17:55.353 [2024-04-17 06:43:59.736572] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.353 [2024-04-17 06:43:59.811501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:55.353 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.353 [2024-04-17 06:43:59.820294] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:17:55.353 [2024-04-17 06:43:59.859658] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.353 [2024-04-17 06:43:59.916005] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.353 [2024-04-17 06:43:59.933141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:55.353 [2024-04-17 06:43:59.941941] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:17:55.611 [2024-04-17 06:43:59.982128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:55.611 [2024-04-17 06:43:59.990877] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:17:55.611 Running I/O for 1 seconds... 00:17:55.611 Running I/O for 1 seconds... 00:17:55.869 Running I/O for 1 seconds... 00:17:55.870 Running I/O for 1 seconds... 
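The four latency tables below come from the four bdevperf instances launched above, one per workload (write, read, flush and unmap on core masks 0x10/0x20/0x40/0x80), each handed its initiator configuration as JSON on /dev/fd/63. A minimal sketch of the write instance as a stand-alone command; the bdev_nvme_attach_controller params mirror the config printed above, while the subsystems/bdev wrapper and the process-substitution plumbing behind /dev/fd/63 are assumptions about how the harness assembles that file.

BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
# core mask 0x10, shm id 1, queue depth 128, 4096-byte I/O, write workload, 1 s run, 256 MB hugepages
"$BDEVPERF" -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)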
00:17:56.804 00:17:56.804 Latency(us) 00:17:56.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.804 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:56.804 Nvme1n1 : 1.02 7542.84 29.46 0.00 0.00 16828.11 9320.68 26991.12 00:17:56.804 =================================================================================================================== 00:17:56.804 Total : 7542.84 29.46 0.00 0.00 16828.11 9320.68 26991.12 00:17:56.804 00:17:56.804 Latency(us) 00:17:56.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.804 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:56.804 Nvme1n1 : 1.01 4532.91 17.71 0.00 0.00 28023.07 13592.65 42525.58 00:17:56.804 =================================================================================================================== 00:17:56.804 Total : 4532.91 17.71 0.00 0.00 28023.07 13592.65 42525.58 00:17:56.804 00:17:56.804 Latency(us) 00:17:56.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.804 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:56.804 Nvme1n1 : 1.00 7854.78 30.68 0.00 0.00 16242.71 4733.16 36117.62 00:17:56.804 =================================================================================================================== 00:17:56.804 Total : 7854.78 30.68 0.00 0.00 16242.71 4733.16 36117.62 00:17:56.804 00:17:56.804 Latency(us) 00:17:56.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.804 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:56.804 Nvme1n1 : 1.00 194219.99 758.67 0.00 0.00 656.52 250.31 885.95 00:17:56.804 =================================================================================================================== 00:17:56.804 Total : 194219.99 758.67 0.00 0.00 656.52 250.31 885.95 00:17:57.062 06:44:01 -- target/bdev_io_wait.sh@38 -- # wait 4178439 00:17:57.062 06:44:01 -- target/bdev_io_wait.sh@39 -- # wait 4178441 00:17:57.062 06:44:01 -- target/bdev_io_wait.sh@40 -- # wait 4178443 00:17:57.062 06:44:01 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:57.062 06:44:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:57.062 06:44:01 -- common/autotest_common.sh@10 -- # set +x 00:17:57.062 06:44:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:57.062 06:44:01 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:57.063 06:44:01 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:57.063 06:44:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:57.063 06:44:01 -- nvmf/common.sh@117 -- # sync 00:17:57.063 06:44:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:57.063 06:44:01 -- nvmf/common.sh@120 -- # set +e 00:17:57.063 06:44:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:57.063 06:44:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:57.063 rmmod nvme_tcp 00:17:57.063 rmmod nvme_fabrics 00:17:57.063 rmmod nvme_keyring 00:17:57.063 06:44:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:57.063 06:44:01 -- nvmf/common.sh@124 -- # set -e 00:17:57.063 06:44:01 -- nvmf/common.sh@125 -- # return 0 00:17:57.063 06:44:01 -- nvmf/common.sh@478 -- # '[' -n 4178410 ']' 00:17:57.063 06:44:01 -- nvmf/common.sh@479 -- # killprocess 4178410 00:17:57.063 06:44:01 -- common/autotest_common.sh@936 -- # '[' -z 4178410 ']' 00:17:57.063 06:44:01 -- 
common/autotest_common.sh@940 -- # kill -0 4178410 00:17:57.063 06:44:01 -- common/autotest_common.sh@941 -- # uname 00:17:57.063 06:44:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:57.063 06:44:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4178410 00:17:57.063 06:44:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:57.063 06:44:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:57.063 06:44:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4178410' 00:17:57.063 killing process with pid 4178410 00:17:57.063 06:44:01 -- common/autotest_common.sh@955 -- # kill 4178410 00:17:57.063 06:44:01 -- common/autotest_common.sh@960 -- # wait 4178410 00:17:57.321 06:44:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:57.321 06:44:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:57.321 06:44:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:57.321 06:44:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:57.321 06:44:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:57.321 06:44:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.321 06:44:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:57.321 06:44:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.858 06:44:03 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:59.858 00:17:59.858 real 0m7.119s 00:17:59.858 user 0m15.088s 00:17:59.858 sys 0m3.858s 00:17:59.858 06:44:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:59.858 06:44:03 -- common/autotest_common.sh@10 -- # set +x 00:17:59.858 ************************************ 00:17:59.858 END TEST nvmf_bdev_io_wait 00:17:59.858 ************************************ 00:17:59.858 06:44:03 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:59.858 06:44:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:59.858 06:44:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:59.858 06:44:03 -- common/autotest_common.sh@10 -- # set +x 00:17:59.858 ************************************ 00:17:59.858 START TEST nvmf_queue_depth 00:17:59.858 ************************************ 00:17:59.858 06:44:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:59.858 * Looking for test storage... 
00:17:59.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:59.858 06:44:04 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:59.858 06:44:04 -- nvmf/common.sh@7 -- # uname -s 00:17:59.858 06:44:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:59.858 06:44:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:59.858 06:44:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:59.858 06:44:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:59.858 06:44:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:59.858 06:44:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:59.858 06:44:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:59.858 06:44:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:59.858 06:44:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:59.858 06:44:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:59.858 06:44:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:59.858 06:44:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:59.858 06:44:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:59.858 06:44:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:59.858 06:44:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:59.858 06:44:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:59.858 06:44:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:59.858 06:44:04 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:59.858 06:44:04 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:59.858 06:44:04 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:59.858 06:44:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.858 06:44:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.858 06:44:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.858 06:44:04 -- paths/export.sh@5 -- # export PATH 00:17:59.859 06:44:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.859 06:44:04 -- nvmf/common.sh@47 -- # : 0 00:17:59.859 06:44:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:59.859 06:44:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:59.859 06:44:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:59.859 06:44:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:59.859 06:44:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:59.859 06:44:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:59.859 06:44:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:59.859 06:44:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:59.859 06:44:04 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:59.859 06:44:04 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:59.859 06:44:04 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:59.859 06:44:04 -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:59.859 06:44:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:59.859 06:44:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:59.859 06:44:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:59.859 06:44:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:59.859 06:44:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:59.859 06:44:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.859 06:44:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:59.859 06:44:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.859 06:44:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:59.859 06:44:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:59.859 06:44:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:59.859 06:44:04 -- common/autotest_common.sh@10 -- # set +x 00:18:01.760 06:44:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:01.760 06:44:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:01.760 06:44:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:01.760 06:44:06 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:01.760 06:44:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:01.760 06:44:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:01.760 06:44:06 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:01.760 06:44:06 -- nvmf/common.sh@295 -- # net_devs=() 
00:18:01.760 06:44:06 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:01.760 06:44:06 -- nvmf/common.sh@296 -- # e810=() 00:18:01.760 06:44:06 -- nvmf/common.sh@296 -- # local -ga e810 00:18:01.760 06:44:06 -- nvmf/common.sh@297 -- # x722=() 00:18:01.760 06:44:06 -- nvmf/common.sh@297 -- # local -ga x722 00:18:01.760 06:44:06 -- nvmf/common.sh@298 -- # mlx=() 00:18:01.760 06:44:06 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:01.760 06:44:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:01.760 06:44:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:01.760 06:44:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:01.760 06:44:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:01.760 06:44:06 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:01.760 06:44:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:01.760 06:44:06 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:01.760 06:44:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:01.760 06:44:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:01.760 06:44:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:01.760 06:44:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:01.760 06:44:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:01.760 06:44:06 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:01.760 06:44:06 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:01.760 06:44:06 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:01.760 06:44:06 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:01.760 06:44:06 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:01.760 06:44:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:01.760 06:44:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:01.760 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:01.760 06:44:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:01.760 06:44:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:01.760 06:44:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:01.760 06:44:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:01.760 06:44:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:01.760 06:44:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:01.760 06:44:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:01.760 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:01.760 06:44:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:01.760 06:44:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:01.760 06:44:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:01.760 06:44:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:01.760 06:44:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:01.760 06:44:06 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:01.760 06:44:06 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:01.760 06:44:06 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:01.760 06:44:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:01.760 06:44:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.760 06:44:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:01.760 06:44:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
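For reference, the device discovery driven above reduces to a short pattern: each supported device ID (Intel E810 0x1592/0x159b, X722 0x37d2, plus the listed Mellanox IDs) is looked up in the host's PCI bus cache, and every matching function is mapped to its kernel interface through sysfs. A minimal standalone sketch of that lookup, with the PCI address taken from this run and everything else assumed:

# Sketch only: resolve the net device(s) behind one PCI function via sysfs.
pci=0000:0a:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. /sys/bus/pci/devices/0000:0a:00.0/net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names (cvl_0_0)
echo "Found net devices under $pci: ${pci_net_devs[*]}"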
00:18:01.760 06:44:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:01.760 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:01.760 06:44:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:01.760 06:44:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:01.760 06:44:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.760 06:44:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:01.760 06:44:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:01.760 06:44:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:01.760 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:01.760 06:44:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:01.760 06:44:06 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:01.760 06:44:06 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:01.760 06:44:06 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:01.760 06:44:06 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:01.760 06:44:06 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:01.760 06:44:06 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:01.760 06:44:06 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:01.760 06:44:06 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:01.760 06:44:06 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:01.760 06:44:06 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:01.760 06:44:06 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:01.760 06:44:06 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:01.760 06:44:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:01.760 06:44:06 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:01.760 06:44:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:01.760 06:44:06 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:01.760 06:44:06 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:01.760 06:44:06 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:01.760 06:44:06 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:01.760 06:44:06 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:01.760 06:44:06 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:01.760 06:44:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:01.760 06:44:06 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:01.760 06:44:06 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:01.760 06:44:06 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:01.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:01.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:18:01.760 00:18:01.760 --- 10.0.0.2 ping statistics --- 00:18:01.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.760 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:18:01.760 06:44:06 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:01.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:01.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:18:01.760 00:18:01.760 --- 10.0.0.1 ping statistics --- 00:18:01.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.760 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:18:01.760 06:44:06 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:01.760 06:44:06 -- nvmf/common.sh@411 -- # return 0 00:18:01.760 06:44:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:01.760 06:44:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:01.760 06:44:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:01.760 06:44:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:01.760 06:44:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:01.760 06:44:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:01.760 06:44:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:01.760 06:44:06 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:01.760 06:44:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:01.760 06:44:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:01.760 06:44:06 -- common/autotest_common.sh@10 -- # set +x 00:18:01.760 06:44:06 -- nvmf/common.sh@470 -- # nvmfpid=4180668 00:18:01.760 06:44:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:01.760 06:44:06 -- nvmf/common.sh@471 -- # waitforlisten 4180668 00:18:01.760 06:44:06 -- common/autotest_common.sh@817 -- # '[' -z 4180668 ']' 00:18:01.760 06:44:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.760 06:44:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:01.760 06:44:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.760 06:44:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:01.760 06:44:06 -- common/autotest_common.sh@10 -- # set +x 00:18:01.760 [2024-04-17 06:44:06.308144] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:18:01.760 [2024-04-17 06:44:06.308238] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.760 EAL: No free 2048 kB hugepages reported on node 1 00:18:02.020 [2024-04-17 06:44:06.372919] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.020 [2024-04-17 06:44:06.457143] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:02.020 [2024-04-17 06:44:06.457232] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:02.020 [2024-04-17 06:44:06.457246] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:02.020 [2024-04-17 06:44:06.457258] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:02.020 [2024-04-17 06:44:06.457267] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
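The nvmf_tcp_init sequence above splits the two E810 ports into a target/initiator pair on a single host: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, TCP port 4420 is opened with an iptables rule, and both directions are ping-verified before nvme-tcp is loaded. Condensed to its essential commands (interface names and addresses as logged; helper internals elided):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # root namespace -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespaced target -> root namespace
modprobe nvme-tcp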
00:18:02.020 [2024-04-17 06:44:06.457300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.020 06:44:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:02.020 06:44:06 -- common/autotest_common.sh@850 -- # return 0 00:18:02.020 06:44:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:02.020 06:44:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:02.020 06:44:06 -- common/autotest_common.sh@10 -- # set +x 00:18:02.020 06:44:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.020 06:44:06 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:02.020 06:44:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:02.020 06:44:06 -- common/autotest_common.sh@10 -- # set +x 00:18:02.020 [2024-04-17 06:44:06.593097] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:02.021 06:44:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:02.021 06:44:06 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:02.021 06:44:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:02.021 06:44:06 -- common/autotest_common.sh@10 -- # set +x 00:18:02.279 Malloc0 00:18:02.279 06:44:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:02.279 06:44:06 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:02.279 06:44:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:02.279 06:44:06 -- common/autotest_common.sh@10 -- # set +x 00:18:02.279 06:44:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:02.279 06:44:06 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:02.279 06:44:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:02.279 06:44:06 -- common/autotest_common.sh@10 -- # set +x 00:18:02.279 06:44:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:02.279 06:44:06 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:02.279 06:44:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:02.279 06:44:06 -- common/autotest_common.sh@10 -- # set +x 00:18:02.279 [2024-04-17 06:44:06.659648] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:02.279 06:44:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:02.279 06:44:06 -- target/queue_depth.sh@30 -- # bdevperf_pid=4180811 00:18:02.279 06:44:06 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:02.279 06:44:06 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:02.279 06:44:06 -- target/queue_depth.sh@33 -- # waitforlisten 4180811 /var/tmp/bdevperf.sock 00:18:02.279 06:44:06 -- common/autotest_common.sh@817 -- # '[' -z 4180811 ']' 00:18:02.279 06:44:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:02.279 06:44:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:02.279 06:44:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
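At this point the queue_depth test has built the whole target side through rpc_cmd: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as its namespace, and a TCP listener on 10.0.0.2:4420; bdevperf is then launched with its own RPC socket so a controller can be attached to it next. Issued directly with scripts/rpc.py the same sequence would look roughly like this (flags copied from the log above; rpc.py path shortened for readability):

rpc=./scripts/rpc.py                       # shorthand for the full scripts/rpc.py path in this workspace
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# bdevperf runs as a separate app with its own RPC socket, as in the log:
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &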
00:18:02.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:02.279 06:44:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:02.279 06:44:06 -- common/autotest_common.sh@10 -- # set +x 00:18:02.279 [2024-04-17 06:44:06.704116] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:18:02.279 [2024-04-17 06:44:06.704202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4180811 ] 00:18:02.279 EAL: No free 2048 kB hugepages reported on node 1 00:18:02.279 [2024-04-17 06:44:06.765770] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.279 [2024-04-17 06:44:06.855035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.537 06:44:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:02.537 06:44:06 -- common/autotest_common.sh@850 -- # return 0 00:18:02.537 06:44:06 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:02.537 06:44:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:02.537 06:44:06 -- common/autotest_common.sh@10 -- # set +x 00:18:02.537 NVMe0n1 00:18:02.537 06:44:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:02.537 06:44:07 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:02.795 Running I/O for 10 seconds... 00:18:12.797 00:18:12.797 Latency(us) 00:18:12.797 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.797 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:12.797 Verification LBA range: start 0x0 length 0x4000 00:18:12.797 NVMe0n1 : 10.09 8374.81 32.71 0.00 0.00 121642.61 24466.77 78060.66 00:18:12.797 =================================================================================================================== 00:18:12.797 Total : 8374.81 32.71 0.00 0.00 121642.61 24466.77 78060.66 00:18:12.797 0 00:18:12.797 06:44:17 -- target/queue_depth.sh@39 -- # killprocess 4180811 00:18:12.797 06:44:17 -- common/autotest_common.sh@936 -- # '[' -z 4180811 ']' 00:18:12.798 06:44:17 -- common/autotest_common.sh@940 -- # kill -0 4180811 00:18:12.798 06:44:17 -- common/autotest_common.sh@941 -- # uname 00:18:12.798 06:44:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:12.798 06:44:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4180811 00:18:12.798 06:44:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:12.798 06:44:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:12.798 06:44:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4180811' 00:18:12.798 killing process with pid 4180811 00:18:12.798 06:44:17 -- common/autotest_common.sh@955 -- # kill 4180811 00:18:12.798 Received shutdown signal, test time was about 10.000000 seconds 00:18:12.798 00:18:12.798 Latency(us) 00:18:12.798 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.798 =================================================================================================================== 00:18:12.798 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:12.798 06:44:17 -- 
common/autotest_common.sh@960 -- # wait 4180811 00:18:13.055 06:44:17 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:13.055 06:44:17 -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:13.055 06:44:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:13.055 06:44:17 -- nvmf/common.sh@117 -- # sync 00:18:13.055 06:44:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:13.055 06:44:17 -- nvmf/common.sh@120 -- # set +e 00:18:13.055 06:44:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:13.055 06:44:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:13.055 rmmod nvme_tcp 00:18:13.055 rmmod nvme_fabrics 00:18:13.055 rmmod nvme_keyring 00:18:13.055 06:44:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:13.055 06:44:17 -- nvmf/common.sh@124 -- # set -e 00:18:13.055 06:44:17 -- nvmf/common.sh@125 -- # return 0 00:18:13.055 06:44:17 -- nvmf/common.sh@478 -- # '[' -n 4180668 ']' 00:18:13.055 06:44:17 -- nvmf/common.sh@479 -- # killprocess 4180668 00:18:13.055 06:44:17 -- common/autotest_common.sh@936 -- # '[' -z 4180668 ']' 00:18:13.055 06:44:17 -- common/autotest_common.sh@940 -- # kill -0 4180668 00:18:13.055 06:44:17 -- common/autotest_common.sh@941 -- # uname 00:18:13.055 06:44:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:13.055 06:44:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4180668 00:18:13.055 06:44:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:13.055 06:44:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:13.055 06:44:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4180668' 00:18:13.055 killing process with pid 4180668 00:18:13.055 06:44:17 -- common/autotest_common.sh@955 -- # kill 4180668 00:18:13.055 06:44:17 -- common/autotest_common.sh@960 -- # wait 4180668 00:18:13.314 06:44:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:13.314 06:44:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:13.314 06:44:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:13.314 06:44:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:13.314 06:44:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:13.314 06:44:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.314 06:44:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.314 06:44:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.847 06:44:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:15.847 00:18:15.848 real 0m15.882s 00:18:15.848 user 0m22.395s 00:18:15.848 sys 0m2.978s 00:18:15.848 06:44:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:15.848 06:44:19 -- common/autotest_common.sh@10 -- # set +x 00:18:15.848 ************************************ 00:18:15.848 END TEST nvmf_queue_depth 00:18:15.848 ************************************ 00:18:15.848 06:44:19 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:15.848 06:44:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:15.848 06:44:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:15.848 06:44:19 -- common/autotest_common.sh@10 -- # set +x 00:18:15.848 ************************************ 00:18:15.848 START TEST nvmf_multipath 00:18:15.848 ************************************ 00:18:15.848 06:44:20 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:15.848 * Looking for test storage... 00:18:15.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:15.848 06:44:20 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:15.848 06:44:20 -- nvmf/common.sh@7 -- # uname -s 00:18:15.848 06:44:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:15.848 06:44:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:15.848 06:44:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:15.848 06:44:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:15.848 06:44:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:15.848 06:44:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:15.848 06:44:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:15.848 06:44:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:15.848 06:44:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:15.848 06:44:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:15.848 06:44:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:15.848 06:44:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:15.848 06:44:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:15.848 06:44:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:15.848 06:44:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:15.848 06:44:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:15.848 06:44:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:15.848 06:44:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:15.848 06:44:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:15.848 06:44:20 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:15.848 06:44:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.848 06:44:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.848 06:44:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.848 06:44:20 -- paths/export.sh@5 -- # export PATH 00:18:15.848 06:44:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.848 06:44:20 -- nvmf/common.sh@47 -- # : 0 00:18:15.848 06:44:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:15.848 06:44:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:15.848 06:44:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:15.848 06:44:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:15.848 06:44:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:15.848 06:44:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:15.848 06:44:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:15.848 06:44:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:15.848 06:44:20 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:15.848 06:44:20 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:15.848 06:44:20 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:15.848 06:44:20 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:15.848 06:44:20 -- target/multipath.sh@43 -- # nvmftestinit 00:18:15.848 06:44:20 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:15.848 06:44:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:15.848 06:44:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:15.848 06:44:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:15.848 06:44:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:15.848 06:44:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.848 06:44:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.848 06:44:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.848 06:44:20 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:15.848 06:44:20 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:15.848 06:44:20 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:15.848 06:44:20 -- common/autotest_common.sh@10 -- # set +x 00:18:17.750 06:44:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:17.750 06:44:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:17.750 06:44:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:17.750 06:44:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:17.750 06:44:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:17.750 06:44:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:17.750 06:44:22 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:18:17.750 06:44:22 -- nvmf/common.sh@295 -- # net_devs=() 00:18:17.750 06:44:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:17.750 06:44:22 -- nvmf/common.sh@296 -- # e810=() 00:18:17.750 06:44:22 -- nvmf/common.sh@296 -- # local -ga e810 00:18:17.750 06:44:22 -- nvmf/common.sh@297 -- # x722=() 00:18:17.750 06:44:22 -- nvmf/common.sh@297 -- # local -ga x722 00:18:17.750 06:44:22 -- nvmf/common.sh@298 -- # mlx=() 00:18:17.750 06:44:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:17.750 06:44:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:17.750 06:44:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:17.750 06:44:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:17.750 06:44:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:17.750 06:44:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:17.750 06:44:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:17.750 06:44:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:17.750 06:44:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:17.750 06:44:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:17.750 06:44:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:17.750 06:44:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:17.750 06:44:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:17.750 06:44:22 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:17.750 06:44:22 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:17.750 06:44:22 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:17.750 06:44:22 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:17.750 06:44:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:17.750 06:44:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:17.750 06:44:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:17.750 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:17.750 06:44:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:17.750 06:44:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:17.750 06:44:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:17.750 06:44:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:17.750 06:44:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:17.751 06:44:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:17.751 06:44:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:17.751 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:17.751 06:44:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:17.751 06:44:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:17.751 06:44:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:17.751 06:44:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:17.751 06:44:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:17.751 06:44:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:17.751 06:44:22 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:17.751 06:44:22 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:17.751 06:44:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:17.751 06:44:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.751 06:44:22 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:18:17.751 06:44:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.751 06:44:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:17.751 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:17.751 06:44:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.751 06:44:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:17.751 06:44:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.751 06:44:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:17.751 06:44:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.751 06:44:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:17.751 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:17.751 06:44:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.751 06:44:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:17.751 06:44:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:17.751 06:44:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:17.751 06:44:22 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:17.751 06:44:22 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:17.751 06:44:22 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:17.751 06:44:22 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:17.751 06:44:22 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:17.751 06:44:22 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:17.751 06:44:22 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:17.751 06:44:22 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:17.751 06:44:22 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:17.751 06:44:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:17.751 06:44:22 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:17.751 06:44:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:17.751 06:44:22 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:17.751 06:44:22 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:17.751 06:44:22 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:17.751 06:44:22 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:17.751 06:44:22 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:17.751 06:44:22 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:17.751 06:44:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:17.751 06:44:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:17.751 06:44:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:17.751 06:44:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:17.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:17.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:18:17.751 00:18:17.751 --- 10.0.0.2 ping statistics --- 00:18:17.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.751 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:18:17.751 06:44:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:17.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:17.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:18:17.751 00:18:17.751 --- 10.0.0.1 ping statistics --- 00:18:17.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.751 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:18:17.751 06:44:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:17.751 06:44:22 -- nvmf/common.sh@411 -- # return 0 00:18:17.751 06:44:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:17.751 06:44:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:17.751 06:44:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:17.751 06:44:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:17.751 06:44:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:17.751 06:44:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:17.751 06:44:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:17.751 06:44:22 -- target/multipath.sh@45 -- # '[' -z ']' 00:18:17.751 06:44:22 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:17.751 only one NIC for nvmf test 00:18:17.751 06:44:22 -- target/multipath.sh@47 -- # nvmftestfini 00:18:17.751 06:44:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:17.751 06:44:22 -- nvmf/common.sh@117 -- # sync 00:18:17.751 06:44:22 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:17.751 06:44:22 -- nvmf/common.sh@120 -- # set +e 00:18:17.751 06:44:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:17.751 06:44:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:17.751 rmmod nvme_tcp 00:18:17.751 rmmod nvme_fabrics 00:18:17.751 rmmod nvme_keyring 00:18:17.751 06:44:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:17.751 06:44:22 -- nvmf/common.sh@124 -- # set -e 00:18:17.751 06:44:22 -- nvmf/common.sh@125 -- # return 0 00:18:17.751 06:44:22 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:18:17.751 06:44:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:17.751 06:44:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:17.751 06:44:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:17.751 06:44:22 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:17.751 06:44:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:17.751 06:44:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.751 06:44:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:17.751 06:44:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.654 06:44:24 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:19.654 06:44:24 -- target/multipath.sh@48 -- # exit 0 00:18:19.654 06:44:24 -- target/multipath.sh@1 -- # nvmftestfini 00:18:19.654 06:44:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:19.654 06:44:24 -- nvmf/common.sh@117 -- # sync 00:18:19.654 06:44:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:19.654 06:44:24 -- nvmf/common.sh@120 -- # set +e 00:18:19.654 06:44:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:19.654 06:44:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:19.654 06:44:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:19.654 06:44:24 -- nvmf/common.sh@124 -- # set -e 00:18:19.655 06:44:24 -- nvmf/common.sh@125 -- # return 0 00:18:19.655 06:44:24 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:18:19.655 06:44:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:19.655 06:44:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:19.655 06:44:24 -- nvmf/common.sh@485 -- # 
nvmf_tcp_fini 00:18:19.655 06:44:24 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:19.655 06:44:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:19.655 06:44:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.655 06:44:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:19.655 06:44:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.655 06:44:24 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:19.655 00:18:19.655 real 0m4.219s 00:18:19.655 user 0m0.747s 00:18:19.655 sys 0m1.459s 00:18:19.655 06:44:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:19.655 06:44:24 -- common/autotest_common.sh@10 -- # set +x 00:18:19.655 ************************************ 00:18:19.655 END TEST nvmf_multipath 00:18:19.655 ************************************ 00:18:19.914 06:44:24 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:19.914 06:44:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:19.914 06:44:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:19.914 06:44:24 -- common/autotest_common.sh@10 -- # set +x 00:18:19.914 ************************************ 00:18:19.914 START TEST nvmf_zcopy 00:18:19.914 ************************************ 00:18:19.914 06:44:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:19.914 * Looking for test storage... 00:18:19.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:19.914 06:44:24 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:19.914 06:44:24 -- nvmf/common.sh@7 -- # uname -s 00:18:19.914 06:44:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:19.914 06:44:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:19.914 06:44:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:19.914 06:44:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:19.914 06:44:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:19.914 06:44:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:19.914 06:44:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:19.914 06:44:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:19.914 06:44:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:19.914 06:44:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:19.914 06:44:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:19.914 06:44:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:19.914 06:44:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:19.914 06:44:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:19.914 06:44:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:19.914 06:44:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:19.914 06:44:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:19.914 06:44:24 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:19.914 06:44:24 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:19.914 06:44:24 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:19.914 
06:44:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.914 06:44:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.914 06:44:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.914 06:44:24 -- paths/export.sh@5 -- # export PATH 00:18:19.914 06:44:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.914 06:44:24 -- nvmf/common.sh@47 -- # : 0 00:18:19.914 06:44:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:19.914 06:44:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:19.914 06:44:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:19.914 06:44:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:19.914 06:44:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:19.914 06:44:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:19.914 06:44:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:19.914 06:44:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:19.914 06:44:24 -- target/zcopy.sh@12 -- # nvmftestinit 00:18:19.914 06:44:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:19.914 06:44:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:19.914 06:44:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:19.914 06:44:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:19.914 06:44:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:19.914 06:44:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.914 06:44:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:18:19.914 06:44:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.914 06:44:24 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:19.914 06:44:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:19.914 06:44:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:19.914 06:44:24 -- common/autotest_common.sh@10 -- # set +x 00:18:22.445 06:44:26 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:22.445 06:44:26 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:22.445 06:44:26 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:22.445 06:44:26 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:22.445 06:44:26 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:22.445 06:44:26 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:22.445 06:44:26 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:22.445 06:44:26 -- nvmf/common.sh@295 -- # net_devs=() 00:18:22.445 06:44:26 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:22.445 06:44:26 -- nvmf/common.sh@296 -- # e810=() 00:18:22.445 06:44:26 -- nvmf/common.sh@296 -- # local -ga e810 00:18:22.445 06:44:26 -- nvmf/common.sh@297 -- # x722=() 00:18:22.445 06:44:26 -- nvmf/common.sh@297 -- # local -ga x722 00:18:22.445 06:44:26 -- nvmf/common.sh@298 -- # mlx=() 00:18:22.445 06:44:26 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:22.445 06:44:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:22.445 06:44:26 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:22.445 06:44:26 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:22.445 06:44:26 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:22.445 06:44:26 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:22.445 06:44:26 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:22.445 06:44:26 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:22.445 06:44:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:22.445 06:44:26 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:22.445 06:44:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:22.445 06:44:26 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:22.445 06:44:26 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:22.445 06:44:26 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:22.445 06:44:26 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:22.445 06:44:26 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:22.445 06:44:26 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:22.445 06:44:26 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:22.445 06:44:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:22.445 06:44:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:22.445 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:22.445 06:44:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:22.445 06:44:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:22.445 06:44:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:22.445 06:44:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:22.445 06:44:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:22.445 06:44:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:22.445 06:44:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:22.445 Found 0000:0a:00.1 (0x8086 - 
0x159b) 00:18:22.445 06:44:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:22.445 06:44:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:22.445 06:44:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:22.445 06:44:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:22.445 06:44:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:22.445 06:44:26 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:22.445 06:44:26 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:22.445 06:44:26 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:22.445 06:44:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:22.445 06:44:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:22.445 06:44:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:22.445 06:44:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:22.445 06:44:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:22.445 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:22.445 06:44:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:22.445 06:44:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:22.445 06:44:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:22.445 06:44:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:22.445 06:44:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:22.445 06:44:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:22.445 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:22.445 06:44:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:22.445 06:44:26 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:22.445 06:44:26 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:22.445 06:44:26 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:22.445 06:44:26 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:22.445 06:44:26 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:22.445 06:44:26 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:22.445 06:44:26 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:22.445 06:44:26 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:22.445 06:44:26 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:22.445 06:44:26 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:22.445 06:44:26 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:22.445 06:44:26 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:22.445 06:44:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:22.445 06:44:26 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:22.445 06:44:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:22.445 06:44:26 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:22.445 06:44:26 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:22.445 06:44:26 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:22.445 06:44:26 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:22.445 06:44:26 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:22.445 06:44:26 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:22.445 06:44:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:22.445 06:44:26 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:22.445 
06:44:26 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:22.445 06:44:26 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:22.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:22.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:18:22.445 00:18:22.445 --- 10.0.0.2 ping statistics --- 00:18:22.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.445 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:18:22.445 06:44:26 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:22.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:22.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:18:22.445 00:18:22.445 --- 10.0.0.1 ping statistics --- 00:18:22.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.445 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:18:22.446 06:44:26 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:22.446 06:44:26 -- nvmf/common.sh@411 -- # return 0 00:18:22.446 06:44:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:22.446 06:44:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:22.446 06:44:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:22.446 06:44:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:22.446 06:44:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:22.446 06:44:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:22.446 06:44:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:22.446 06:44:26 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:22.446 06:44:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:22.446 06:44:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:22.446 06:44:26 -- common/autotest_common.sh@10 -- # set +x 00:18:22.446 06:44:26 -- nvmf/common.sh@470 -- # nvmfpid=4185870 00:18:22.446 06:44:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:22.446 06:44:26 -- nvmf/common.sh@471 -- # waitforlisten 4185870 00:18:22.446 06:44:26 -- common/autotest_common.sh@817 -- # '[' -z 4185870 ']' 00:18:22.446 06:44:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.446 06:44:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:22.446 06:44:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.446 06:44:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:22.446 06:44:26 -- common/autotest_common.sh@10 -- # set +x 00:18:22.446 [2024-04-17 06:44:26.719407] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:18:22.446 [2024-04-17 06:44:26.719498] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.446 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.446 [2024-04-17 06:44:26.791414] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.446 [2024-04-17 06:44:26.885617] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:22.446 [2024-04-17 06:44:26.885673] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:22.446 [2024-04-17 06:44:26.885699] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:22.446 [2024-04-17 06:44:26.885712] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:22.446 [2024-04-17 06:44:26.885738] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:22.446 [2024-04-17 06:44:26.885775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.446 06:44:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:22.446 06:44:26 -- common/autotest_common.sh@850 -- # return 0 00:18:22.446 06:44:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:22.446 06:44:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:22.446 06:44:26 -- common/autotest_common.sh@10 -- # set +x 00:18:22.446 06:44:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:22.446 06:44:27 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:22.446 06:44:27 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:22.446 06:44:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:22.446 06:44:27 -- common/autotest_common.sh@10 -- # set +x 00:18:22.446 [2024-04-17 06:44:27.026370] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:22.446 06:44:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:22.446 06:44:27 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:22.446 06:44:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:22.446 06:44:27 -- common/autotest_common.sh@10 -- # set +x 00:18:22.446 06:44:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:22.446 06:44:27 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:22.446 06:44:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:22.446 06:44:27 -- common/autotest_common.sh@10 -- # set +x 00:18:22.446 [2024-04-17 06:44:27.042579] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.446 06:44:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:22.446 06:44:27 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:22.446 06:44:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:22.446 06:44:27 -- common/autotest_common.sh@10 -- # set +x 00:18:22.705 06:44:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:22.705 06:44:27 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:22.705 06:44:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:22.705 06:44:27 -- common/autotest_common.sh@10 -- # set +x 00:18:22.705 malloc0 00:18:22.705 06:44:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:22.705 06:44:27 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:22.705 06:44:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:22.705 06:44:27 -- common/autotest_common.sh@10 -- # set +x 00:18:22.705 06:44:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:22.705 06:44:27 -- target/zcopy.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:22.705 06:44:27 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:22.705 06:44:27 -- nvmf/common.sh@521 -- # config=() 00:18:22.705 06:44:27 -- nvmf/common.sh@521 -- # local subsystem config 00:18:22.705 06:44:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:22.705 06:44:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:22.705 { 00:18:22.705 "params": { 00:18:22.705 "name": "Nvme$subsystem", 00:18:22.705 "trtype": "$TEST_TRANSPORT", 00:18:22.705 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:22.705 "adrfam": "ipv4", 00:18:22.705 "trsvcid": "$NVMF_PORT", 00:18:22.705 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:22.705 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:22.705 "hdgst": ${hdgst:-false}, 00:18:22.705 "ddgst": ${ddgst:-false} 00:18:22.705 }, 00:18:22.705 "method": "bdev_nvme_attach_controller" 00:18:22.705 } 00:18:22.705 EOF 00:18:22.705 )") 00:18:22.705 06:44:27 -- nvmf/common.sh@543 -- # cat 00:18:22.705 06:44:27 -- nvmf/common.sh@545 -- # jq . 00:18:22.705 06:44:27 -- nvmf/common.sh@546 -- # IFS=, 00:18:22.705 06:44:27 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:22.705 "params": { 00:18:22.705 "name": "Nvme1", 00:18:22.705 "trtype": "tcp", 00:18:22.705 "traddr": "10.0.0.2", 00:18:22.705 "adrfam": "ipv4", 00:18:22.705 "trsvcid": "4420", 00:18:22.705 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.705 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:22.705 "hdgst": false, 00:18:22.705 "ddgst": false 00:18:22.705 }, 00:18:22.705 "method": "bdev_nvme_attach_controller" 00:18:22.705 }' 00:18:22.705 [2024-04-17 06:44:27.121038] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:18:22.705 [2024-04-17 06:44:27.121104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4186020 ] 00:18:22.705 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.705 [2024-04-17 06:44:27.181798] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.705 [2024-04-17 06:44:27.270714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.705 [2024-04-17 06:44:27.279463] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:18:23.271 Running I/O for 10 seconds... 
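The gen_nvmf_target_json heredoc traced above is how the initiator side gets configured: the resolved bdev_nvme_attach_controller entry (Nvme1 -> 10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1, digests off) is piped through jq and handed to bdevperf on /dev/fd/62, so no config file ever touches disk. A rough standalone equivalent, written to a real file instead (the outer "subsystems"/"bdev" wrapper is assumed; the trace only shows the inner method/params object, and the file path is hypothetical), would be:

    # hypothetical standalone reproduction of the 10-second verify run above
    cat > /tmp/bdevperf_nvme.json << 'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    ./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -t 10 -q 128 -w verify -o 8192

The -q 128 / -o 8192 pair matches the queue depth and I/O size reported in the result table that follows.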
00:18:33.260 00:18:33.260 Latency(us) 00:18:33.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.260 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:33.260 Verification LBA range: start 0x0 length 0x1000 00:18:33.260 Nvme1n1 : 10.02 4703.36 36.74 0.00 0.00 27145.67 3932.16 38253.61 00:18:33.260 =================================================================================================================== 00:18:33.260 Total : 4703.36 36.74 0.00 0.00 27145.67 3932.16 38253.61 00:18:33.260 06:44:37 -- target/zcopy.sh@39 -- # perfpid=4187205 00:18:33.260 06:44:37 -- target/zcopy.sh@41 -- # xtrace_disable 00:18:33.260 06:44:37 -- common/autotest_common.sh@10 -- # set +x 00:18:33.260 06:44:37 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:33.260 06:44:37 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:33.260 06:44:37 -- nvmf/common.sh@521 -- # config=() 00:18:33.260 06:44:37 -- nvmf/common.sh@521 -- # local subsystem config 00:18:33.260 06:44:37 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:33.260 06:44:37 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:33.260 { 00:18:33.260 "params": { 00:18:33.260 "name": "Nvme$subsystem", 00:18:33.260 "trtype": "$TEST_TRANSPORT", 00:18:33.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:33.260 "adrfam": "ipv4", 00:18:33.260 "trsvcid": "$NVMF_PORT", 00:18:33.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:33.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:33.260 "hdgst": ${hdgst:-false}, 00:18:33.260 "ddgst": ${ddgst:-false} 00:18:33.260 }, 00:18:33.260 "method": "bdev_nvme_attach_controller" 00:18:33.260 } 00:18:33.260 EOF 00:18:33.260 )") 00:18:33.260 [2024-04-17 06:44:37.850683] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.260 [2024-04-17 06:44:37.850739] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.260 06:44:37 -- nvmf/common.sh@543 -- # cat 00:18:33.260 06:44:37 -- nvmf/common.sh@545 -- # jq . 
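Quick sanity check on the verify-run numbers above: 4703.36 IOPS at an 8192-byte I/O size is exactly the 36.74 MiB/s the table reports:

    # IOPS x IO size, converted to MiB/s
    awk 'BEGIN { printf "%.2f\n", 4703.36 * 8192 / (1024 * 1024) }'   # -> 36.74

The second bdevperf job being set up here (-t 5 -q 128 -w randrw -M 50 -o 8192) reuses the same generated JSON, this time for a 5-second 50/50 random read/write run.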
00:18:33.260 06:44:37 -- nvmf/common.sh@546 -- # IFS=, 00:18:33.260 06:44:37 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:33.260 "params": { 00:18:33.260 "name": "Nvme1", 00:18:33.260 "trtype": "tcp", 00:18:33.260 "traddr": "10.0.0.2", 00:18:33.260 "adrfam": "ipv4", 00:18:33.260 "trsvcid": "4420", 00:18:33.260 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.260 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:33.260 "hdgst": false, 00:18:33.260 "ddgst": false 00:18:33.260 }, 00:18:33.260 "method": "bdev_nvme_attach_controller" 00:18:33.260 }' 00:18:33.260 [2024-04-17 06:44:37.858633] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.260 [2024-04-17 06:44:37.858661] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.260 [2024-04-17 06:44:37.866646] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.260 [2024-04-17 06:44:37.866670] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.519 [2024-04-17 06:44:37.874662] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.519 [2024-04-17 06:44:37.874684] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.519 [2024-04-17 06:44:37.882680] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.519 [2024-04-17 06:44:37.882702] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.519 [2024-04-17 06:44:37.888206] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:18:33.519 [2024-04-17 06:44:37.888268] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4187205 ] 00:18:33.519 [2024-04-17 06:44:37.890699] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.519 [2024-04-17 06:44:37.890719] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.519 [2024-04-17 06:44:37.898723] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.519 [2024-04-17 06:44:37.898745] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.519 [2024-04-17 06:44:37.906744] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.519 [2024-04-17 06:44:37.906765] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.519 [2024-04-17 06:44:37.914766] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.519 [2024-04-17 06:44:37.914786] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.519 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.519 [2024-04-17 06:44:37.922802] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.519 [2024-04-17 06:44:37.922827] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.519 [2024-04-17 06:44:37.930827] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.519 [2024-04-17 06:44:37.930852] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.519 [2024-04-17 06:44:37.938848] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 
1 already in use 00:18:33.519 [2024-04-17 06:44:37.938873] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.519 [2024-04-17 06:44:37.946871] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.519 [2024-04-17 06:44:37.946896] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.519 [2024-04-17 06:44:37.952798] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.519 [2024-04-17 06:44:37.954892] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.519 [2024-04-17 06:44:37.954917] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.519 [2024-04-17 06:44:37.962951] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.519 [2024-04-17 06:44:37.963001] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.519 [2024-04-17 06:44:37.970955] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.519 [2024-04-17 06:44:37.970986] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.519 [2024-04-17 06:44:37.978962] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.519 [2024-04-17 06:44:37.978988] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.519 [2024-04-17 06:44:37.986983] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.519 [2024-04-17 06:44:37.987008] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.519 [2024-04-17 06:44:37.995004] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.519 [2024-04-17 06:44:37.995029] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.519 [2024-04-17 06:44:38.003030] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.519 [2024-04-17 06:44:38.003056] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.519 [2024-04-17 06:44:38.011078] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.519 [2024-04-17 06:44:38.011118] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.519 [2024-04-17 06:44:38.019079] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.519 [2024-04-17 06:44:38.019108] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.519 [2024-04-17 06:44:38.027097] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.519 [2024-04-17 06:44:38.027123] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.519 [2024-04-17 06:44:38.035114] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.519 [2024-04-17 06:44:38.035140] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.519 [2024-04-17 06:44:38.043136] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.519 [2024-04-17 06:44:38.043162] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.519 [2024-04-17 06:44:38.046867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.519 [2024-04-17 06:44:38.051161] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.519 [2024-04-17 06:44:38.051194] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.519 [2024-04-17 06:44:38.055597] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:18:33.519 [2024-04-17 06:44:38.059189] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.520 [2024-04-17 06:44:38.059228] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.520 [2024-04-17 06:44:38.067257] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.520 [2024-04-17 06:44:38.067292] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.520 [2024-04-17 06:44:38.075273] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.520 [2024-04-17 06:44:38.075309] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.520 [2024-04-17 06:44:38.083298] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.520 [2024-04-17 06:44:38.083337] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.520 [2024-04-17 06:44:38.091315] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.520 [2024-04-17 06:44:38.091352] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.520 [2024-04-17 06:44:38.099327] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.520 [2024-04-17 06:44:38.099365] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.520 [2024-04-17 06:44:38.107348] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.520 [2024-04-17 06:44:38.107396] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.520 [2024-04-17 06:44:38.115371] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.520 [2024-04-17 06:44:38.115402] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.520 [2024-04-17 06:44:38.123370] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.520 [2024-04-17 06:44:38.123397] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.778 [2024-04-17 06:44:38.131415] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.778 [2024-04-17 06:44:38.131467] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.778 [2024-04-17 06:44:38.139442] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.778 [2024-04-17 06:44:38.139477] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.778 [2024-04-17 06:44:38.147431] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.778 [2024-04-17 06:44:38.147469] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.778 [2024-04-17 06:44:38.155467] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.778 [2024-04-17 06:44:38.155492] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.778 [2024-04-17 06:44:38.163564] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.778 [2024-04-17 06:44:38.163603] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.778 [2024-04-17 06:44:38.171590] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.778 [2024-04-17 06:44:38.171620] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.778 [2024-04-17 06:44:38.179607] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.778 [2024-04-17 06:44:38.179634] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.778 [2024-04-17 06:44:38.187630] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.778 [2024-04-17 06:44:38.187657] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.778 [2024-04-17 06:44:38.195658] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.778 [2024-04-17 06:44:38.195687] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.778 [2024-04-17 06:44:38.203675] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.778 [2024-04-17 06:44:38.203703] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.778 [2024-04-17 06:44:38.211697] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.778 [2024-04-17 06:44:38.211724] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.778 [2024-04-17 06:44:38.219719] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.778 [2024-04-17 06:44:38.219744] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.778 [2024-04-17 06:44:38.227741] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.778 [2024-04-17 06:44:38.227767] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.778 [2024-04-17 06:44:38.235765] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.778 [2024-04-17 06:44:38.235791] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.778 [2024-04-17 06:44:38.243789] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.779 [2024-04-17 06:44:38.243816] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.779 [2024-04-17 06:44:38.251812] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.779 [2024-04-17 06:44:38.251838] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.779 [2024-04-17 06:44:38.259832] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.779 [2024-04-17 06:44:38.259864] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.779 [2024-04-17 06:44:38.267858] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.779 [2024-04-17 06:44:38.267883] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.779 [2024-04-17 06:44:38.275880] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.779 [2024-04-17 06:44:38.275904] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.779 [2024-04-17 06:44:38.283899] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.779 [2024-04-17 06:44:38.283925] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.779 [2024-04-17 06:44:38.291932] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.779 [2024-04-17 06:44:38.291959] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.779 [2024-04-17 06:44:38.299953] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.779 [2024-04-17 06:44:38.299978] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.779 [2024-04-17 06:44:38.307976] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.779 [2024-04-17 06:44:38.308000] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.779 [2024-04-17 06:44:38.316000] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.779 [2024-04-17 06:44:38.316025] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.779 [2024-04-17 06:44:38.324023] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.779 [2024-04-17 06:44:38.324050] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.779 [2024-04-17 06:44:38.332035] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.779 [2024-04-17 06:44:38.332057] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.779 [2024-04-17 06:44:38.340071] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.779 [2024-04-17 06:44:38.340097] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.779 [2024-04-17 06:44:38.348108] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.779 [2024-04-17 06:44:38.348138] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.779 [2024-04-17 06:44:38.356115] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.779 [2024-04-17 06:44:38.356141] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.779 Running I/O for 5 seconds... 
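From this point the log is dominated by repeating pairs of "Requested NSID 1 already in use" / "Unable to add namespace" errors: while the 5-second randrw job runs, the test evidently keeps re-issuing nvmf_subsystem_add_ns against cnode1, whose NSID 1 is already occupied by malloc0, and every rejection is reported from the nvmf_rpc_ns_paused callback, i.e. the subsystem is paused and resumed around each failed attempt while I/O is in flight. A minimal sketch of that pattern (direct rpc.py invocation assumed here; the log itself goes through the rpc_cmd wrapper):

    # hypothetical sketch: hammer the add-namespace RPC while bdevperf ($perfpid) is alive;
    # each call is expected to fail with "Requested NSID 1 already in use"
    while kill -0 "$perfpid" 2> /dev/null; do
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done

Given that loop, the error pairs below are expected output rather than a target failure.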
00:18:33.779 [2024-04-17 06:44:38.364161] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.779 [2024-04-17 06:44:38.364194] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.779 [2024-04-17 06:44:38.378023] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.779 [2024-04-17 06:44:38.378057] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.037 [2024-04-17 06:44:38.389725] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.037 [2024-04-17 06:44:38.389757] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.037 [2024-04-17 06:44:38.401829] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.037 [2024-04-17 06:44:38.401862] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.037 [2024-04-17 06:44:38.413391] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.037 [2024-04-17 06:44:38.413420] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.037 [2024-04-17 06:44:38.425493] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.037 [2024-04-17 06:44:38.425526] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.037 [2024-04-17 06:44:38.437422] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.037 [2024-04-17 06:44:38.437450] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.037 [2024-04-17 06:44:38.449272] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.037 [2024-04-17 06:44:38.449301] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.037 [2024-04-17 06:44:38.460983] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.037 [2024-04-17 06:44:38.461014] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.037 [2024-04-17 06:44:38.472455] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.037 [2024-04-17 06:44:38.472501] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.037 [2024-04-17 06:44:38.483728] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.037 [2024-04-17 06:44:38.483760] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.037 [2024-04-17 06:44:38.495776] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.038 [2024-04-17 06:44:38.495808] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.038 [2024-04-17 06:44:38.507070] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.038 [2024-04-17 06:44:38.507101] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.038 [2024-04-17 06:44:38.518708] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.038 [2024-04-17 06:44:38.518738] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.038 [2024-04-17 06:44:38.530263] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.038 
[2024-04-17 06:44:38.530291] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.038 [2024-04-17 06:44:38.542143] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.038 [2024-04-17 06:44:38.542173] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.038 [2024-04-17 06:44:38.553838] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.038 [2024-04-17 06:44:38.553868] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.038 [2024-04-17 06:44:38.565208] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.038 [2024-04-17 06:44:38.565255] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.038 [2024-04-17 06:44:38.576190] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.038 [2024-04-17 06:44:38.576241] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.038 [2024-04-17 06:44:38.587889] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.038 [2024-04-17 06:44:38.587920] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.038 [2024-04-17 06:44:38.599467] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.038 [2024-04-17 06:44:38.599498] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.038 [2024-04-17 06:44:38.610858] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.038 [2024-04-17 06:44:38.610889] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.038 [2024-04-17 06:44:38.622299] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.038 [2024-04-17 06:44:38.622327] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.038 [2024-04-17 06:44:38.633837] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.038 [2024-04-17 06:44:38.633868] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.296 [2024-04-17 06:44:38.645334] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.296 [2024-04-17 06:44:38.645362] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.296 [2024-04-17 06:44:38.656998] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.296 [2024-04-17 06:44:38.657028] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.296 [2024-04-17 06:44:38.668099] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.296 [2024-04-17 06:44:38.668126] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.296 [2024-04-17 06:44:38.681289] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.296 [2024-04-17 06:44:38.681326] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.296 [2024-04-17 06:44:38.691401] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.296 [2024-04-17 06:44:38.691428] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.296 [2024-04-17 06:44:38.702338] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.296 [2024-04-17 06:44:38.702366] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.296 [2024-04-17 06:44:38.713109] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.296 [2024-04-17 06:44:38.713136] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.296 [2024-04-17 06:44:38.723829] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.296 [2024-04-17 06:44:38.723857] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.296 [2024-04-17 06:44:38.736440] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.296 [2024-04-17 06:44:38.736468] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.296 [2024-04-17 06:44:38.747989] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.296 [2024-04-17 06:44:38.748017] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.296 [2024-04-17 06:44:38.757450] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.296 [2024-04-17 06:44:38.757479] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.296 [2024-04-17 06:44:38.769072] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.296 [2024-04-17 06:44:38.769100] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.296 [2024-04-17 06:44:38.779942] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.296 [2024-04-17 06:44:38.779970] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.296 [2024-04-17 06:44:38.790461] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.296 [2024-04-17 06:44:38.790489] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.296 [2024-04-17 06:44:38.801187] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.296 [2024-04-17 06:44:38.801214] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.297 [2024-04-17 06:44:38.812101] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.297 [2024-04-17 06:44:38.812130] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.297 [2024-04-17 06:44:38.823528] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.297 [2024-04-17 06:44:38.823556] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.297 [2024-04-17 06:44:38.834283] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.297 [2024-04-17 06:44:38.834311] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.297 [2024-04-17 06:44:38.845075] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.297 [2024-04-17 06:44:38.845102] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.297 [2024-04-17 06:44:38.855969] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.297 [2024-04-17 06:44:38.855996] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.297 [2024-04-17 06:44:38.866578] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.297 [2024-04-17 06:44:38.866607] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.297 [2024-04-17 06:44:38.876895] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.297 [2024-04-17 06:44:38.876924] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.297 [2024-04-17 06:44:38.887535] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.297 [2024-04-17 06:44:38.887564] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.297 [2024-04-17 06:44:38.898080] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.297 [2024-04-17 06:44:38.898107] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.555 [2024-04-17 06:44:38.911192] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.555 [2024-04-17 06:44:38.911220] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.555 [2024-04-17 06:44:38.921017] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.555 [2024-04-17 06:44:38.921045] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.555 [2024-04-17 06:44:38.932204] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.555 [2024-04-17 06:44:38.932243] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.555 [2024-04-17 06:44:38.942761] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.555 [2024-04-17 06:44:38.942789] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.555 [2024-04-17 06:44:38.953369] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.555 [2024-04-17 06:44:38.953398] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.555 [2024-04-17 06:44:38.963165] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.555 [2024-04-17 06:44:38.963205] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.555 [2024-04-17 06:44:38.974246] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.555 [2024-04-17 06:44:38.974275] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.555 [2024-04-17 06:44:38.984934] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.555 [2024-04-17 06:44:38.984963] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.555 [2024-04-17 06:44:38.995578] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.555 [2024-04-17 06:44:38.995606] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.555 [2024-04-17 06:44:39.006140] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.555 [2024-04-17 06:44:39.006168] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.555 [2024-04-17 06:44:39.017026] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.555 [2024-04-17 06:44:39.017055] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.555 [2024-04-17 06:44:39.027662] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.555 [2024-04-17 06:44:39.027690] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.555 [2024-04-17 06:44:39.038631] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.555 [2024-04-17 06:44:39.038659] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.555 [2024-04-17 06:44:39.049317] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.555 [2024-04-17 06:44:39.049344] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.555 [2024-04-17 06:44:39.060192] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.555 [2024-04-17 06:44:39.060228] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.555 [2024-04-17 06:44:39.072648] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.555 [2024-04-17 06:44:39.072676] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.555 [2024-04-17 06:44:39.082700] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.555 [2024-04-17 06:44:39.082727] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.555 [2024-04-17 06:44:39.093426] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.555 [2024-04-17 06:44:39.093453] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.555 [2024-04-17 06:44:39.103941] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.555 [2024-04-17 06:44:39.103969] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.555 [2024-04-17 06:44:39.114522] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.555 [2024-04-17 06:44:39.114549] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.555 [2024-04-17 06:44:39.125322] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.555 [2024-04-17 06:44:39.125350] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.555 [2024-04-17 06:44:39.135994] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.555 [2024-04-17 06:44:39.136022] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.555 [2024-04-17 06:44:39.146274] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.555 [2024-04-17 06:44:39.146302] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.555 [2024-04-17 06:44:39.156938] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.555 [2024-04-17 06:44:39.156965] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.813 [2024-04-17 06:44:39.167602] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.813 [2024-04-17 06:44:39.167630] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.813 [2024-04-17 06:44:39.178437] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.813 [2024-04-17 06:44:39.178465] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.813 [2024-04-17 06:44:39.189206] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.813 [2024-04-17 06:44:39.189234] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.813 [2024-04-17 06:44:39.199508] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.813 [2024-04-17 06:44:39.199536] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.813 [2024-04-17 06:44:39.209353] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.813 [2024-04-17 06:44:39.209381] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.813 [2024-04-17 06:44:39.219977] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.813 [2024-04-17 06:44:39.220005] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.813 [2024-04-17 06:44:39.232543] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.813 [2024-04-17 06:44:39.232571] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.813 [2024-04-17 06:44:39.242684] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.813 [2024-04-17 06:44:39.242712] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.814 [2024-04-17 06:44:39.253575] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.814 [2024-04-17 06:44:39.253603] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.814 [2024-04-17 06:44:39.265896] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.814 [2024-04-17 06:44:39.265932] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.814 [2024-04-17 06:44:39.275690] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.814 [2024-04-17 06:44:39.275718] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.814 [2024-04-17 06:44:39.286846] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.814 [2024-04-17 06:44:39.286874] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.814 [2024-04-17 06:44:39.297474] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.814 [2024-04-17 06:44:39.297502] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.814 [2024-04-17 06:44:39.308478] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.814 [2024-04-17 06:44:39.308505] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.814 [2024-04-17 06:44:39.319351] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.814 [2024-04-17 06:44:39.319379] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.814 [2024-04-17 06:44:39.329985] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.814 [2024-04-17 06:44:39.330014] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.814 [2024-04-17 06:44:39.340682] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.814 [2024-04-17 06:44:39.340710] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.814 [2024-04-17 06:44:39.351550] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.814 [2024-04-17 06:44:39.351580] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.814 [2024-04-17 06:44:39.362406] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.814 [2024-04-17 06:44:39.362441] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.814 [2024-04-17 06:44:39.373270] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.814 [2024-04-17 06:44:39.373299] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.814 [2024-04-17 06:44:39.386014] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.814 [2024-04-17 06:44:39.386042] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.814 [2024-04-17 06:44:39.395645] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.814 [2024-04-17 06:44:39.395672] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.814 [2024-04-17 06:44:39.406883] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.814 [2024-04-17 06:44:39.406911] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.814 [2024-04-17 06:44:39.419389] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.814 [2024-04-17 06:44:39.419417] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.072 [2024-04-17 06:44:39.428970] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.072 [2024-04-17 06:44:39.428998] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.072 [2024-04-17 06:44:39.440513] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.072 [2024-04-17 06:44:39.440541] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.072 [2024-04-17 06:44:39.451380] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.072 [2024-04-17 06:44:39.451409] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.072 [2024-04-17 06:44:39.461790] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.072 [2024-04-17 06:44:39.461817] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.072 [2024-04-17 06:44:39.472601] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.072 [2024-04-17 06:44:39.472637] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.072 [2024-04-17 06:44:39.483483] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.072 [2024-04-17 06:44:39.483511] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.072 [2024-04-17 06:44:39.496052] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.072 [2024-04-17 06:44:39.496080] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.072 [2024-04-17 06:44:39.505747] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.072 [2024-04-17 06:44:39.505774] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.072 [2024-04-17 06:44:39.518050] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.072 [2024-04-17 06:44:39.518080] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.072 [2024-04-17 06:44:39.529257] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.072 [2024-04-17 06:44:39.529285] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.072 [2024-04-17 06:44:39.542207] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.072 [2024-04-17 06:44:39.542251] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.072 [2024-04-17 06:44:39.553028] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.072 [2024-04-17 06:44:39.553058] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.072 [2024-04-17 06:44:39.565356] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.072 [2024-04-17 06:44:39.565384] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.072 [2024-04-17 06:44:39.576927] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.072 [2024-04-17 06:44:39.576959] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.072 [2024-04-17 06:44:39.589814] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.072 [2024-04-17 06:44:39.589845] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.072 [2024-04-17 06:44:39.600003] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.072 [2024-04-17 06:44:39.600033] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.072 [2024-04-17 06:44:39.611792] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.072 [2024-04-17 06:44:39.611822] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.072 [2024-04-17 06:44:39.623649] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.072 [2024-04-17 06:44:39.623680] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.072 [2024-04-17 06:44:39.635674] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.072 [2024-04-17 06:44:39.635705] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.072 [2024-04-17 06:44:39.647396] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.072 [2024-04-17 06:44:39.647424] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.072 [2024-04-17 06:44:39.659104] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.072 [2024-04-17 06:44:39.659135] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.072 [2024-04-17 06:44:39.670746] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.072 [2024-04-17 06:44:39.670776] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.330 [2024-04-17 06:44:39.684783] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.330 [2024-04-17 06:44:39.684814] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.330 [2024-04-17 06:44:39.696369] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.330 [2024-04-17 06:44:39.696404] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.330 [2024-04-17 06:44:39.709815] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.330 [2024-04-17 06:44:39.709845] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.330 [2024-04-17 06:44:39.720721] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.330 [2024-04-17 06:44:39.720751] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.330 [2024-04-17 06:44:39.731981] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.330 [2024-04-17 06:44:39.732011] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.330 [2024-04-17 06:44:39.743858] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.330 [2024-04-17 06:44:39.743888] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.330 [2024-04-17 06:44:39.755740] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.330 [2024-04-17 06:44:39.755770] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.330 [2024-04-17 06:44:39.767015] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.330 [2024-04-17 06:44:39.767045] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.330 [2024-04-17 06:44:39.778606] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.330 [2024-04-17 06:44:39.778636] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.330 [2024-04-17 06:44:39.790157] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.330 [2024-04-17 06:44:39.790197] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.330 [2024-04-17 06:44:39.801711] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.330 [2024-04-17 06:44:39.801741] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.330 [2024-04-17 06:44:39.813453] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.330 [2024-04-17 06:44:39.813486] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.330 [2024-04-17 06:44:39.824954] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.330 [2024-04-17 06:44:39.824984] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.330 [2024-04-17 06:44:39.837004] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.330 [2024-04-17 06:44:39.837035] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.330 [2024-04-17 06:44:39.848685] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.330 [2024-04-17 06:44:39.848716] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.330 [2024-04-17 06:44:39.860848] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.331 [2024-04-17 06:44:39.860879] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.331 [2024-04-17 06:44:39.872849] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.331 [2024-04-17 06:44:39.872881] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.331 [2024-04-17 06:44:39.884417] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.331 [2024-04-17 06:44:39.884446] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.331 [2024-04-17 06:44:39.896300] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.331 [2024-04-17 06:44:39.896328] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.331 [2024-04-17 06:44:39.907638] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.331 [2024-04-17 06:44:39.907666] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.331 [2024-04-17 06:44:39.919004] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.331 [2024-04-17 06:44:39.919036] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.331 [2024-04-17 06:44:39.930812] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.331 [2024-04-17 06:44:39.930843] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.588 [2024-04-17 06:44:39.942107] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.588 [2024-04-17 06:44:39.942138] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.588 [2024-04-17 06:44:39.953744] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.588 [2024-04-17 06:44:39.953774] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.588 [2024-04-17 06:44:39.965685] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.588 [2024-04-17 06:44:39.965716] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.588 [2024-04-17 06:44:39.977043] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.588 [2024-04-17 06:44:39.977074] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.588 [2024-04-17 06:44:39.988397] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.588 [2024-04-17 06:44:39.988425] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.588 [2024-04-17 06:44:40.000021] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.588 [2024-04-17 06:44:40.000051] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.588 [2024-04-17 06:44:40.011572] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.588 [2024-04-17 06:44:40.011612] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.588 [2024-04-17 06:44:40.022877] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.588 [2024-04-17 06:44:40.022911] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.588 [2024-04-17 06:44:40.034411] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.588 [2024-04-17 06:44:40.034444] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.588 [2024-04-17 06:44:40.046664] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.588 [2024-04-17 06:44:40.046699] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.588 [2024-04-17 06:44:40.058709] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.588 [2024-04-17 06:44:40.058741] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.588 [2024-04-17 06:44:40.069987] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.589 [2024-04-17 06:44:40.070020] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.589 [2024-04-17 06:44:40.081719] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.589 [2024-04-17 06:44:40.081751] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.589 [2024-04-17 06:44:40.093739] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.589 [2024-04-17 06:44:40.093771] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.589 [2024-04-17 06:44:40.105434] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.589 [2024-04-17 06:44:40.105462] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.589 [2024-04-17 06:44:40.117016] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.589 [2024-04-17 06:44:40.117046] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.589 [2024-04-17 06:44:40.129040] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.589 [2024-04-17 06:44:40.129071] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.589 [2024-04-17 06:44:40.140655] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.589 [2024-04-17 06:44:40.140686] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.589 [2024-04-17 06:44:40.152745] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.589 [2024-04-17 06:44:40.152776] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.589 [2024-04-17 06:44:40.164676] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.589 [2024-04-17 06:44:40.164707] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.589 [2024-04-17 06:44:40.176236] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.589 [2024-04-17 06:44:40.176264] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.589 [2024-04-17 06:44:40.189627] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.589 [2024-04-17 06:44:40.189659] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.846 [2024-04-17 06:44:40.200908] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.846 [2024-04-17 06:44:40.200938] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.846 [2024-04-17 06:44:40.212226] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.846 [2024-04-17 06:44:40.212255] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.846 [2024-04-17 06:44:40.223625] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.846 [2024-04-17 06:44:40.223656] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.846 [2024-04-17 06:44:40.234892] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.846 [2024-04-17 06:44:40.234923] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.846 [2024-04-17 06:44:40.246493] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.846 [2024-04-17 06:44:40.246523] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.846 [2024-04-17 06:44:40.257979] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.846 [2024-04-17 06:44:40.258010] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.846 [2024-04-17 06:44:40.269621] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.846 [2024-04-17 06:44:40.269653] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.846 [2024-04-17 06:44:40.281124] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.846 [2024-04-17 06:44:40.281155] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.846 [2024-04-17 06:44:40.292870] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.846 [2024-04-17 06:44:40.292901] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.846 [2024-04-17 06:44:40.306142] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.846 [2024-04-17 06:44:40.306182] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.846 [2024-04-17 06:44:40.316616] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.846 [2024-04-17 06:44:40.316647] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.846 [2024-04-17 06:44:40.328313] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.846 [2024-04-17 06:44:40.328341] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.846 [2024-04-17 06:44:40.339801] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.846 [2024-04-17 06:44:40.339832] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.846 [2024-04-17 06:44:40.353278] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.846 [2024-04-17 06:44:40.353306] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.846 [2024-04-17 06:44:40.364188] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.846 [2024-04-17 06:44:40.364233] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.847 [2024-04-17 06:44:40.375491] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.847 [2024-04-17 06:44:40.375523] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.847 [2024-04-17 06:44:40.386645] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.847 [2024-04-17 06:44:40.386677] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.847 [2024-04-17 06:44:40.398220] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.847 [2024-04-17 06:44:40.398263] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.847 [2024-04-17 06:44:40.409398] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.847 [2024-04-17 06:44:40.409426] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.847 [2024-04-17 06:44:40.421310] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.847 [2024-04-17 06:44:40.421338] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.847 [2024-04-17 06:44:40.432619] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.847 [2024-04-17 06:44:40.432650] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.847 [2024-04-17 06:44:40.444296] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.847 [2024-04-17 06:44:40.444325] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.105 [2024-04-17 06:44:40.455471] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.105 [2024-04-17 06:44:40.455503] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.105 [2024-04-17 06:44:40.466893] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.105 [2024-04-17 06:44:40.466924] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.105 [2024-04-17 06:44:40.478064] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.105 [2024-04-17 06:44:40.478095] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.105 [2024-04-17 06:44:40.491462] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.105 [2024-04-17 06:44:40.491507] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.105 [2024-04-17 06:44:40.502130] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.105 [2024-04-17 06:44:40.502161] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.105 [2024-04-17 06:44:40.513715] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.105 [2024-04-17 06:44:40.513743] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.105 [2024-04-17 06:44:40.524447] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.105 [2024-04-17 06:44:40.524476] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.105 [2024-04-17 06:44:40.535654] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.105 [2024-04-17 06:44:40.535682] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.105 [2024-04-17 06:44:40.546760] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.105 [2024-04-17 06:44:40.546788] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.105 [2024-04-17 06:44:40.559557] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.105 [2024-04-17 06:44:40.559589] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.105 [2024-04-17 06:44:40.570305] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.105 [2024-04-17 06:44:40.570334] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.105 [2024-04-17 06:44:40.581567] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.105 [2024-04-17 06:44:40.581598] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.105 [2024-04-17 06:44:40.595353] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.105 [2024-04-17 06:44:40.595380] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.105 [2024-04-17 06:44:40.606782] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.105 [2024-04-17 06:44:40.606813] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.105 [2024-04-17 06:44:40.617808] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.105 [2024-04-17 06:44:40.617839] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.105 [2024-04-17 06:44:40.628892] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.105 [2024-04-17 06:44:40.628922] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.105 [2024-04-17 06:44:40.642069] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.105 [2024-04-17 06:44:40.642100] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.105 [2024-04-17 06:44:40.652754] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.105 [2024-04-17 06:44:40.652784] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.105 [2024-04-17 06:44:40.663950] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.105 [2024-04-17 06:44:40.663982] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.105 [2024-04-17 06:44:40.677252] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.105 [2024-04-17 06:44:40.677279] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.105 [2024-04-17 06:44:40.687141] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.105 [2024-04-17 06:44:40.687171] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.105 [2024-04-17 06:44:40.699186] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.105 [2024-04-17 06:44:40.699216] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.105 [2024-04-17 06:44:40.710644] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.105 [2024-04-17 06:44:40.710675] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.363 [2024-04-17 06:44:40.721857] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.363 [2024-04-17 06:44:40.721887] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.363 [2024-04-17 06:44:40.732899] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.363 [2024-04-17 06:44:40.732929] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.363 [2024-04-17 06:44:40.746303] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.363 [2024-04-17 06:44:40.746331] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.363 [2024-04-17 06:44:40.756417] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.363 [2024-04-17 06:44:40.756444] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.363 [2024-04-17 06:44:40.767922] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.363 [2024-04-17 06:44:40.767953] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.363 [2024-04-17 06:44:40.781357] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.363 [2024-04-17 06:44:40.781386] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.363 [2024-04-17 06:44:40.792088] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.363 [2024-04-17 06:44:40.792126] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.363 [2024-04-17 06:44:40.803401] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.363 [2024-04-17 06:44:40.803429] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.363 [2024-04-17 06:44:40.814615] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.363 [2024-04-17 06:44:40.814645] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.363 [2024-04-17 06:44:40.826707] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.363 [2024-04-17 06:44:40.826738] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.363 [2024-04-17 06:44:40.838894] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.363 [2024-04-17 06:44:40.838925] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.363 [2024-04-17 06:44:40.850461] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.363 [2024-04-17 06:44:40.850505] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.363 [2024-04-17 06:44:40.862172] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.363 [2024-04-17 06:44:40.862228] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.363 [2024-04-17 06:44:40.873665] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.363 [2024-04-17 06:44:40.873696] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.363 [2024-04-17 06:44:40.885431] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.363 [2024-04-17 06:44:40.885486] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.363 [2024-04-17 06:44:40.899172] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.363 [2024-04-17 06:44:40.899214] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.363 [2024-04-17 06:44:40.910083] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.363 [2024-04-17 06:44:40.910114] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.363 [2024-04-17 06:44:40.921550] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.363 [2024-04-17 06:44:40.921582] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.363 [2024-04-17 06:44:40.932425] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.363 [2024-04-17 06:44:40.932452] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.363 [2024-04-17 06:44:40.943679] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.363 [2024-04-17 06:44:40.943710] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.363 [2024-04-17 06:44:40.954978] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.363 [2024-04-17 06:44:40.955009] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.363 [2024-04-17 06:44:40.966034] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.363 [2024-04-17 06:44:40.966064] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.622 [2024-04-17 06:44:40.979333] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.622 [2024-04-17 06:44:40.979360] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.622 [2024-04-17 06:44:40.989887] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.622 [2024-04-17 06:44:40.989918] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.622 [2024-04-17 06:44:41.001621] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.622 [2024-04-17 06:44:41.001653] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.622 [2024-04-17 06:44:41.013128] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.622 [2024-04-17 06:44:41.013166] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.622 [2024-04-17 06:44:41.024321] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.622 [2024-04-17 06:44:41.024348] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.622 [2024-04-17 06:44:41.035004] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.622 [2024-04-17 06:44:41.035035] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.622 [2024-04-17 06:44:41.046356] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.622 [2024-04-17 06:44:41.046384] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.622 [2024-04-17 06:44:41.057245] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.622 [2024-04-17 06:44:41.057272] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.622 [2024-04-17 06:44:41.070558] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.622 [2024-04-17 06:44:41.070588] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.622 [2024-04-17 06:44:41.081454] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.622 [2024-04-17 06:44:41.081498] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.622 [2024-04-17 06:44:41.092560] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.622 [2024-04-17 06:44:41.092591] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.622 [2024-04-17 06:44:41.103877] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.622 [2024-04-17 06:44:41.103908] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.622 [2024-04-17 06:44:41.115354] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.622 [2024-04-17 06:44:41.115382] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.622 [2024-04-17 06:44:41.126465] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.622 [2024-04-17 06:44:41.126496] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.622 [2024-04-17 06:44:41.137837] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.622 [2024-04-17 06:44:41.137868] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.622 [2024-04-17 06:44:41.149447] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.622 [2024-04-17 06:44:41.149478] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.622 [2024-04-17 06:44:41.160843] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.622 [2024-04-17 06:44:41.160874] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.622 [2024-04-17 06:44:41.172292] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.622 [2024-04-17 06:44:41.172320] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.622 [2024-04-17 06:44:41.183504] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.622 [2024-04-17 06:44:41.183535] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.622 [2024-04-17 06:44:41.194275] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.622 [2024-04-17 06:44:41.194303] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.622 [2024-04-17 06:44:41.205868] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.622 [2024-04-17 06:44:41.205899] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.622 [2024-04-17 06:44:41.217475] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.622 [2024-04-17 06:44:41.217520] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.880 [2024-04-17 06:44:41.228995] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.880 [2024-04-17 06:44:41.229034] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.880 [2024-04-17 06:44:41.240647] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.880 [2024-04-17 06:44:41.240677] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.880 [2024-04-17 06:44:41.252423] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.880 [2024-04-17 06:44:41.252466] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.880 [2024-04-17 06:44:41.263673] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.880 [2024-04-17 06:44:41.263703] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.880 [2024-04-17 06:44:41.275000] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.880 [2024-04-17 06:44:41.275031] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.880 [2024-04-17 06:44:41.286940] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.880 [2024-04-17 06:44:41.286972] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.880 [2024-04-17 06:44:41.298843] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.880 [2024-04-17 06:44:41.298874] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.880 [2024-04-17 06:44:41.310240] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.880 [2024-04-17 06:44:41.310268] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.880 [2024-04-17 06:44:41.321642] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.880 [2024-04-17 06:44:41.321673] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.881 [2024-04-17 06:44:41.333206] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.881 [2024-04-17 06:44:41.333251] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.881 [2024-04-17 06:44:41.344652] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.881 [2024-04-17 06:44:41.344682] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.881 [2024-04-17 06:44:41.356219] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.881 [2024-04-17 06:44:41.356263] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.881 [2024-04-17 06:44:41.367805] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.881 [2024-04-17 06:44:41.367836] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.881 [2024-04-17 06:44:41.379354] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.881 [2024-04-17 06:44:41.379382] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.881 [2024-04-17 06:44:41.391146] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.881 [2024-04-17 06:44:41.391184] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.881 [2024-04-17 06:44:41.402974] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.881 [2024-04-17 06:44:41.403004] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.881 [2024-04-17 06:44:41.414317] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.881 [2024-04-17 06:44:41.414345] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.881 [2024-04-17 06:44:41.426284] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.881 [2024-04-17 06:44:41.426312] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.881 [2024-04-17 06:44:41.438065] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.881 [2024-04-17 06:44:41.438096] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.881 [2024-04-17 06:44:41.449841] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.881 [2024-04-17 06:44:41.449879] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.881 [2024-04-17 06:44:41.463632] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.881 [2024-04-17 06:44:41.463663] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.881 [2024-04-17 06:44:41.474629] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.881 [2024-04-17 06:44:41.474660] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.881 [2024-04-17 06:44:41.486025] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.881 [2024-04-17 06:44:41.486056] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.139 [2024-04-17 06:44:41.499820] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.139 [2024-04-17 06:44:41.499851] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.139 [2024-04-17 06:44:41.510941] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.139 [2024-04-17 06:44:41.510972] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.139 [2024-04-17 06:44:41.521933] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.139 [2024-04-17 06:44:41.521964] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.139 [2024-04-17 06:44:41.535220] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.139 [2024-04-17 06:44:41.535263] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.139 [2024-04-17 06:44:41.546440] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.139 [2024-04-17 06:44:41.546468] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.139 [2024-04-17 06:44:41.557866] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.139 [2024-04-17 06:44:41.557896] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.139 [2024-04-17 06:44:41.569192] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.139 [2024-04-17 06:44:41.569239] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.139 [2024-04-17 06:44:41.580414] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.139 [2024-04-17 06:44:41.580443] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.139 [2024-04-17 06:44:41.592166] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.139 [2024-04-17 06:44:41.592206] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.139 [2024-04-17 06:44:41.603406] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.139 [2024-04-17 06:44:41.603434] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.139 [2024-04-17 06:44:41.614554] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.139 [2024-04-17 06:44:41.614585] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.139 [2024-04-17 06:44:41.625770] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.139 [2024-04-17 06:44:41.625801] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.139 [2024-04-17 06:44:41.637448] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.139 [2024-04-17 06:44:41.637493] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.139 [2024-04-17 06:44:41.649071] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.139 [2024-04-17 06:44:41.649102] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.139 [2024-04-17 06:44:41.660665] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.139 [2024-04-17 06:44:41.660696] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.139 [2024-04-17 06:44:41.672109] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.139 [2024-04-17 06:44:41.672140] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.139 [2024-04-17 06:44:41.683502] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.139 [2024-04-17 06:44:41.683533] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.139 [2024-04-17 06:44:41.695124] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.139 [2024-04-17 06:44:41.695155] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.139 [2024-04-17 06:44:41.706701] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.139 [2024-04-17 06:44:41.706733] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.139 [2024-04-17 06:44:41.717925] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.139 [2024-04-17 06:44:41.717956] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.139 [2024-04-17 06:44:41.729443] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.139 [2024-04-17 06:44:41.729471] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.139 [2024-04-17 06:44:41.741102] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.139 [2024-04-17 06:44:41.741130] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.397 [2024-04-17 06:44:41.752557] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.397 [2024-04-17 06:44:41.752588] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.397 [2024-04-17 06:44:41.765741] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.397 [2024-04-17 06:44:41.765773] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.397 [2024-04-17 06:44:41.776078] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.397 [2024-04-17 06:44:41.776109] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.397 [2024-04-17 06:44:41.788331] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.397 [2024-04-17 06:44:41.788358] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.397 [2024-04-17 06:44:41.800147] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.397 [2024-04-17 06:44:41.800185] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.397 [2024-04-17 06:44:41.811713] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.397 [2024-04-17 06:44:41.811741] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.397 [2024-04-17 06:44:41.823575] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.397 [2024-04-17 06:44:41.823606] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.397 [2024-04-17 06:44:41.835012] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.397 [2024-04-17 06:44:41.835043] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.397 [2024-04-17 06:44:41.848147] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.397 [2024-04-17 06:44:41.848185] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.397 [2024-04-17 06:44:41.858674] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.397 [2024-04-17 06:44:41.858705] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.397 [2024-04-17 06:44:41.869954] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.397 [2024-04-17 06:44:41.869985] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.397 [2024-04-17 06:44:41.881252] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.397 [2024-04-17 06:44:41.881280] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.397 [2024-04-17 06:44:41.892523] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.397 [2024-04-17 06:44:41.892554] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.397 [2024-04-17 06:44:41.903958] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.397 [2024-04-17 06:44:41.903989] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.397 [2024-04-17 06:44:41.915581] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.397 [2024-04-17 06:44:41.915612] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.397 [2024-04-17 06:44:41.927079] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.397 [2024-04-17 06:44:41.927109] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.397 [2024-04-17 06:44:41.938277] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.397 [2024-04-17 06:44:41.938308] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.397 [2024-04-17 06:44:41.949920] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.397 [2024-04-17 06:44:41.949950] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.397 [2024-04-17 06:44:41.961528] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.397 [2024-04-17 06:44:41.961558] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.398 [2024-04-17 06:44:41.975335] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.398 [2024-04-17 06:44:41.975362] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.398 [2024-04-17 06:44:41.986189] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.398 [2024-04-17 06:44:41.986234] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.398 [2024-04-17 06:44:41.997688] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.398 [2024-04-17 06:44:41.997719] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.655 [2024-04-17 06:44:42.009041] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.655 [2024-04-17 06:44:42.009071] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.655 [2024-04-17 06:44:42.020088] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.655 [2024-04-17 06:44:42.020119] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.655 [2024-04-17 06:44:42.033708] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.655 [2024-04-17 06:44:42.033738] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.656 [2024-04-17 06:44:42.044146] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.656 [2024-04-17 06:44:42.044185] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.656 [2024-04-17 06:44:42.055963] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.656 [2024-04-17 06:44:42.055994] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.656 [2024-04-17 06:44:42.067269] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.656 [2024-04-17 06:44:42.067297] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.656 [2024-04-17 06:44:42.078558] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.656 [2024-04-17 06:44:42.078588] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.656 [2024-04-17 06:44:42.089621] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.656 [2024-04-17 06:44:42.089652] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.656 [2024-04-17 06:44:42.102467] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.656 [2024-04-17 06:44:42.102498] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.656 [2024-04-17 06:44:42.112161] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.656 [2024-04-17 06:44:42.112215] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.656 [2024-04-17 06:44:42.124001] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.656 [2024-04-17 06:44:42.124031] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.656 [2024-04-17 06:44:42.135801] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.656 [2024-04-17 06:44:42.135833] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.656 [2024-04-17 06:44:42.147087] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.656 [2024-04-17 06:44:42.147117] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.656 [2024-04-17 06:44:42.158701] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.656 [2024-04-17 06:44:42.158731] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.656 [2024-04-17 06:44:42.170569] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.656 [2024-04-17 06:44:42.170597] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.656 [2024-04-17 06:44:42.181350] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.656 [2024-04-17 06:44:42.181378] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.656 [2024-04-17 06:44:42.192039] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.656 [2024-04-17 06:44:42.192066] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.656 [2024-04-17 06:44:42.202528] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.656 [2024-04-17 06:44:42.202555] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.656 [2024-04-17 06:44:42.213351] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.656 [2024-04-17 06:44:42.213378] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.656 [2024-04-17 06:44:42.224096] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.656 [2024-04-17 06:44:42.224124] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.656 [2024-04-17 06:44:42.237133] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.656 [2024-04-17 06:44:42.237161] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.656 [2024-04-17 06:44:42.246833] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.656 [2024-04-17 06:44:42.246860] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.656 [2024-04-17 06:44:42.257944] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.656 [2024-04-17 06:44:42.257971] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.914 [2024-04-17 06:44:42.268366] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.914 [2024-04-17 06:44:42.268394] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.914 [2024-04-17 06:44:42.278910] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.914 [2024-04-17 06:44:42.278937] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.914 [2024-04-17 06:44:42.289349] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.914 [2024-04-17 06:44:42.289376] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.914 [2024-04-17 06:44:42.299742] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.914 [2024-04-17 06:44:42.299769] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.914 [2024-04-17 06:44:42.310957] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.914 [2024-04-17 06:44:42.310984] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.914 [2024-04-17 06:44:42.321630] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.914 [2024-04-17 06:44:42.321657] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.914 [2024-04-17 06:44:42.332365] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.914 [2024-04-17 06:44:42.332392] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.914 [2024-04-17 06:44:42.343143] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.914 [2024-04-17 06:44:42.343171] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.914 [2024-04-17 06:44:42.353778] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.914 [2024-04-17 06:44:42.353805] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.914 [2024-04-17 06:44:42.364106] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.914 [2024-04-17 06:44:42.364133] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.914 [2024-04-17 06:44:42.375156] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.914 [2024-04-17 06:44:42.375192] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.914 [2024-04-17 06:44:42.386122] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.914 [2024-04-17 06:44:42.386149] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.914 [2024-04-17 06:44:42.397497] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.914 [2024-04-17 06:44:42.397525] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.914 [2024-04-17 06:44:42.408149] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.914 [2024-04-17 06:44:42.408183] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.914 [2024-04-17 06:44:42.419148] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.914 [2024-04-17 06:44:42.419185] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.914 [2024-04-17 06:44:42.430243] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.914 [2024-04-17 06:44:42.430270] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.914 [2024-04-17 06:44:42.441387] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.914 [2024-04-17 06:44:42.441414] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.914 [2024-04-17 06:44:42.452145] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.914 [2024-04-17 06:44:42.452173] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.914 [2024-04-17 06:44:42.463447] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.914 [2024-04-17 06:44:42.463473] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.914 [2024-04-17 06:44:42.474565] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.914 [2024-04-17 06:44:42.474592] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.914 [2024-04-17 06:44:42.485022] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.914 [2024-04-17 06:44:42.485050] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.914 [2024-04-17 06:44:42.495942] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.914 [2024-04-17 06:44:42.495969] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.914 [2024-04-17 06:44:42.506593] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.914 [2024-04-17 06:44:42.506620] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.914 [2024-04-17 06:44:42.517489] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.914 [2024-04-17 06:44:42.517525] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.172 [2024-04-17 06:44:42.528465] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.172 [2024-04-17 06:44:42.528492] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.172 [2024-04-17 06:44:42.539508] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.172 [2024-04-17 06:44:42.539535] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.172 [2024-04-17 06:44:42.550202] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.172 [2024-04-17 06:44:42.550229] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.172 [2024-04-17 06:44:42.560918] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.172 [2024-04-17 06:44:42.560945] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.172 [2024-04-17 06:44:42.573632] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.172 [2024-04-17 06:44:42.573659] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.172 [2024-04-17 06:44:42.583103] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.172 [2024-04-17 06:44:42.583130] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.172 [2024-04-17 06:44:42.594769] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.172 [2024-04-17 06:44:42.594796] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.172 [2024-04-17 06:44:42.605743] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.172 [2024-04-17 06:44:42.605770] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.172 [2024-04-17 06:44:42.616626] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.172 [2024-04-17 06:44:42.616653] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.172 [2024-04-17 06:44:42.627430] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.172 [2024-04-17 06:44:42.627457] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.172 [2024-04-17 06:44:42.638012] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.172 [2024-04-17 06:44:42.638039] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.172 [2024-04-17 06:44:42.648930] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.173 [2024-04-17 06:44:42.648957] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.173 [2024-04-17 06:44:42.659527] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.173 [2024-04-17 06:44:42.659553] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.173 [2024-04-17 06:44:42.670374] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.173 [2024-04-17 06:44:42.670401] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.173 [2024-04-17 06:44:42.681300] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.173 [2024-04-17 06:44:42.681327] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.173 [2024-04-17 06:44:42.692049] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.173 [2024-04-17 06:44:42.692076] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.173 [2024-04-17 06:44:42.704801] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.173 [2024-04-17 06:44:42.704829] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.173 [2024-04-17 06:44:42.714715] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.173 [2024-04-17 06:44:42.714742] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.173 [2024-04-17 06:44:42.726185] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.173 [2024-04-17 06:44:42.726220] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.173 [2024-04-17 06:44:42.738778] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.173 [2024-04-17 06:44:42.738806] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.173 [2024-04-17 06:44:42.748983] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.173 [2024-04-17 06:44:42.749012] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.173 [2024-04-17 06:44:42.760094] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.173 [2024-04-17 06:44:42.760122] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.173 [2024-04-17 06:44:42.772891] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.173 [2024-04-17 06:44:42.772919] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.431 [2024-04-17 06:44:42.784692] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.431 [2024-04-17 06:44:42.784720] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.431 [2024-04-17 06:44:42.793778] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.431 [2024-04-17 06:44:42.793805] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.431 [2024-04-17 06:44:42.809625] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.431 [2024-04-17 06:44:42.809655] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.431 [2024-04-17 06:44:42.819947] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.431 [2024-04-17 06:44:42.819974] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.431 [2024-04-17 06:44:42.831324] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.431 [2024-04-17 06:44:42.831351] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.431 [2024-04-17 06:44:42.841989] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.431 [2024-04-17 06:44:42.842017] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.431 [2024-04-17 06:44:42.852751] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.431 [2024-04-17 06:44:42.852790] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.431 [2024-04-17 06:44:42.863789] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.431 [2024-04-17 06:44:42.863817] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.431 [2024-04-17 06:44:42.876486] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.431 [2024-04-17 06:44:42.876513] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.431 [2024-04-17 06:44:42.886041] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.431 [2024-04-17 06:44:42.886068] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.431 [2024-04-17 06:44:42.897071] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.431 [2024-04-17 06:44:42.897098] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.431 [2024-04-17 06:44:42.907438] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.431 [2024-04-17 06:44:42.907465] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.431 [2024-04-17 06:44:42.918333] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.431 [2024-04-17 06:44:42.918360] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.431 [2024-04-17 06:44:42.929405] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.431 [2024-04-17 06:44:42.929431] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.431 [2024-04-17 06:44:42.940429] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.431 [2024-04-17 06:44:42.940464] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.431 [2024-04-17 06:44:42.951379] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.431 [2024-04-17 06:44:42.951407] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.431 [2024-04-17 06:44:42.962299] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.431 [2024-04-17 06:44:42.962326] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.431 [2024-04-17 06:44:42.973365] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.431 [2024-04-17 06:44:42.973392] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.431 [2024-04-17 06:44:42.984301] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.431 [2024-04-17 06:44:42.984328] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.431 [2024-04-17 06:44:42.996928] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.431 [2024-04-17 06:44:42.996955] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.431 [2024-04-17 06:44:43.006296] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.431 [2024-04-17 06:44:43.006323] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.431 [2024-04-17 06:44:43.017782] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.431 [2024-04-17 06:44:43.017809] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.431 [2024-04-17 06:44:43.028612] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.431 [2024-04-17 06:44:43.028639] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.689 [2024-04-17 06:44:43.039669] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.689 [2024-04-17 06:44:43.039697] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.689 [2024-04-17 06:44:43.051007] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.689 [2024-04-17 06:44:43.051034] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.689 [2024-04-17 06:44:43.062459] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.689 [2024-04-17 06:44:43.062486] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.689 [2024-04-17 06:44:43.073656] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.689 [2024-04-17 06:44:43.073686] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.689 [2024-04-17 06:44:43.084770] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.689 [2024-04-17 06:44:43.084798] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.689 [2024-04-17 06:44:43.096652] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.689 [2024-04-17 06:44:43.096682] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.689 [2024-04-17 06:44:43.108700] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.689 [2024-04-17 06:44:43.108729] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.689 [2024-04-17 06:44:43.120547] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.689 [2024-04-17 06:44:43.120578] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.689 [2024-04-17 06:44:43.132526] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.689 [2024-04-17 06:44:43.132556] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.689 [2024-04-17 06:44:43.144253] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.689 [2024-04-17 06:44:43.144280] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.689 [2024-04-17 06:44:43.155983] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.689 [2024-04-17 06:44:43.156023] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.689 [2024-04-17 06:44:43.167204] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.689 [2024-04-17 06:44:43.167248] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.689 [2024-04-17 06:44:43.178761] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.689 [2024-04-17 06:44:43.178789] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.689 [2024-04-17 06:44:43.190489] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.689 [2024-04-17 06:44:43.190516] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.689 [2024-04-17 06:44:43.202247] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.689 [2024-04-17 06:44:43.202274] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.689 [2024-04-17 06:44:43.213927] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.689 [2024-04-17 06:44:43.213958] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.689 [2024-04-17 06:44:43.225831] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.689 [2024-04-17 06:44:43.225861] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.689 [2024-04-17 06:44:43.237755] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.689 [2024-04-17 06:44:43.237785] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.689 [2024-04-17 06:44:43.251153] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.689 [2024-04-17 06:44:43.251191] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.689 [2024-04-17 06:44:43.262084] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.689 [2024-04-17 06:44:43.262114] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.689 [2024-04-17 06:44:43.273580] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.689 [2024-04-17 06:44:43.273610] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.689 [2024-04-17 06:44:43.284747] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.689 [2024-04-17 06:44:43.284777] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.948 [2024-04-17 06:44:43.296341] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.948 [2024-04-17 06:44:43.296368] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.948 [2024-04-17 06:44:43.307929] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.948 [2024-04-17 06:44:43.307959] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.948 [2024-04-17 06:44:43.319124] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.948 [2024-04-17 06:44:43.319155] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.948 [2024-04-17 06:44:43.330636] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.948 [2024-04-17 06:44:43.330666] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.948 [2024-04-17 06:44:43.341750] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.948 [2024-04-17 06:44:43.341780] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.948 [2024-04-17 06:44:43.354833] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.948 [2024-04-17 06:44:43.354863] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.949 [2024-04-17 06:44:43.365280] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.949 [2024-04-17 06:44:43.365318] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.949 [2024-04-17 06:44:43.375911] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.949 [2024-04-17 06:44:43.375941] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.949 00:18:38.949 Latency(us) 00:18:38.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.949 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:18:38.949 Nvme1n1 : 5.01 11309.48 88.36 0.00 0.00 11301.58 5000.15 25243.50 00:18:38.949 =================================================================================================================== 00:18:38.949 Total : 11309.48 88.36 0.00 0.00 11301.58 5000.15 25243.50 00:18:38.949 [2024-04-17 06:44:43.384078] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.949 [2024-04-17 06:44:43.384107] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.949 [2024-04-17 06:44:43.392036] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.949 [2024-04-17 06:44:43.392064] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.949 [2024-04-17 06:44:43.400058] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.949 [2024-04-17 06:44:43.400091] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.949 [2024-04-17 06:44:43.408116] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.949 [2024-04-17 06:44:43.408165] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.949 [2024-04-17 06:44:43.416133] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.949 [2024-04-17 06:44:43.416187] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.949 [2024-04-17 06:44:43.424154] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.949 [2024-04-17 06:44:43.424211] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.949 [2024-04-17 06:44:43.432187] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.949 [2024-04-17 06:44:43.432232] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.949 [2024-04-17 06:44:43.440210] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.949 [2024-04-17 06:44:43.440258] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.949 [2024-04-17 06:44:43.448230] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.949 [2024-04-17 06:44:43.448275] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.949 [2024-04-17 06:44:43.456248] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.949 [2024-04-17 06:44:43.456294] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.949 [2024-04-17 06:44:43.464272] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.949 [2024-04-17 06:44:43.464318] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.949 [2024-04-17 06:44:43.472291] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.949 [2024-04-17 06:44:43.472341] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.949 [2024-04-17 06:44:43.480312] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.949 [2024-04-17 06:44:43.480363] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.949 [2024-04-17 06:44:43.488335] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.949 [2024-04-17 06:44:43.488383] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.949 [2024-04-17 06:44:43.496348] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.949 [2024-04-17 06:44:43.496396] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.949 [2024-04-17 06:44:43.504369] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.949 [2024-04-17 06:44:43.504416] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.949 [2024-04-17 06:44:43.512396] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.949 [2024-04-17 06:44:43.512444] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.949 [2024-04-17 06:44:43.520401] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.949 [2024-04-17 06:44:43.520444] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.949 [2024-04-17 06:44:43.528389] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.949 [2024-04-17 06:44:43.528414] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.949 [2024-04-17 06:44:43.536436] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.949 [2024-04-17 06:44:43.536473] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.949 [2024-04-17 06:44:43.544493] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.949 [2024-04-17 06:44:43.544541] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.949 [2024-04-17 06:44:43.552507] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.949 [2024-04-17 06:44:43.552555] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.207 [2024-04-17 06:44:43.560515] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.207 [2024-04-17 06:44:43.560566] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.207 [2024-04-17 06:44:43.568499] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.207 [2024-04-17 06:44:43.568526] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.207 [2024-04-17 06:44:43.576563] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.207 [2024-04-17 06:44:43.576610] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.207 [2024-04-17 06:44:43.584592] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.207 [2024-04-17 06:44:43.584639] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.207 [2024-04-17 06:44:43.592605] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.207 [2024-04-17 06:44:43.592639] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.207 [2024-04-17 06:44:43.600607] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.207 [2024-04-17 06:44:43.600631] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.207 [2024-04-17 06:44:43.608627] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.207 [2024-04-17 06:44:43.608651] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (4187205) - No such process 00:18:39.207 06:44:43 -- target/zcopy.sh@49 -- # wait 4187205 00:18:39.207 06:44:43 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:39.207 06:44:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:39.207 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:18:39.207 06:44:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:39.207 06:44:43 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:39.207 06:44:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:39.207 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:18:39.207 delay0 00:18:39.207 06:44:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:39.207 06:44:43 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:39.207 06:44:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:39.207 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:18:39.207 06:44:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:39.207 06:44:43 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:39.207 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.207 [2024-04-17 06:44:43.729448] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current 
discovery service or discovery service referral 00:18:47.317 Initializing NVMe Controllers 00:18:47.317 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:47.317 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:47.317 Initialization complete. Launching workers. 00:18:47.317 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 268, failed: 14007 00:18:47.317 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 14174, failed to submit 101 00:18:47.317 success 14065, unsuccess 109, failed 0 00:18:47.317 06:44:50 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:47.317 06:44:50 -- target/zcopy.sh@60 -- # nvmftestfini 00:18:47.317 06:44:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:47.317 06:44:50 -- nvmf/common.sh@117 -- # sync 00:18:47.317 06:44:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:47.317 06:44:50 -- nvmf/common.sh@120 -- # set +e 00:18:47.317 06:44:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:47.317 06:44:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:47.317 rmmod nvme_tcp 00:18:47.317 rmmod nvme_fabrics 00:18:47.317 rmmod nvme_keyring 00:18:47.317 06:44:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:47.317 06:44:50 -- nvmf/common.sh@124 -- # set -e 00:18:47.317 06:44:50 -- nvmf/common.sh@125 -- # return 0 00:18:47.317 06:44:50 -- nvmf/common.sh@478 -- # '[' -n 4185870 ']' 00:18:47.317 06:44:50 -- nvmf/common.sh@479 -- # killprocess 4185870 00:18:47.317 06:44:50 -- common/autotest_common.sh@936 -- # '[' -z 4185870 ']' 00:18:47.317 06:44:50 -- common/autotest_common.sh@940 -- # kill -0 4185870 00:18:47.317 06:44:50 -- common/autotest_common.sh@941 -- # uname 00:18:47.317 06:44:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:47.317 06:44:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4185870 00:18:47.317 06:44:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:47.317 06:44:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:47.317 06:44:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4185870' 00:18:47.317 killing process with pid 4185870 00:18:47.317 06:44:50 -- common/autotest_common.sh@955 -- # kill 4185870 00:18:47.317 06:44:50 -- common/autotest_common.sh@960 -- # wait 4185870 00:18:47.317 06:44:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:47.317 06:44:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:47.317 06:44:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:47.317 06:44:51 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:47.317 06:44:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:47.317 06:44:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.317 06:44:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:47.317 06:44:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.695 06:44:53 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:48.695 00:18:48.695 real 0m28.894s 00:18:48.695 user 0m41.345s 00:18:48.695 sys 0m10.035s 00:18:48.695 06:44:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:48.695 06:44:53 -- common/autotest_common.sh@10 -- # set +x 00:18:48.695 ************************************ 00:18:48.695 END TEST nvmf_zcopy 00:18:48.695 ************************************ 00:18:48.695 06:44:53 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:48.695 06:44:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:48.695 06:44:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:48.695 06:44:53 -- common/autotest_common.sh@10 -- # set +x 00:18:48.953 ************************************ 00:18:48.953 START TEST nvmf_nmic 00:18:48.953 ************************************ 00:18:48.953 06:44:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:48.953 * Looking for test storage... 00:18:48.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:48.953 06:44:53 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:48.953 06:44:53 -- nvmf/common.sh@7 -- # uname -s 00:18:48.953 06:44:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:48.953 06:44:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:48.953 06:44:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:48.953 06:44:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:48.953 06:44:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:48.953 06:44:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:48.953 06:44:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:48.953 06:44:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:48.953 06:44:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:48.953 06:44:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:48.953 06:44:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:48.953 06:44:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:48.953 06:44:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:48.953 06:44:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:48.953 06:44:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:48.953 06:44:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:48.953 06:44:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:48.953 06:44:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:48.953 06:44:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:48.953 06:44:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:48.954 06:44:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.954 06:44:53 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.954 06:44:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.954 06:44:53 -- paths/export.sh@5 -- # export PATH 00:18:48.954 06:44:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.954 06:44:53 -- nvmf/common.sh@47 -- # : 0 00:18:48.954 06:44:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:48.954 06:44:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:48.954 06:44:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:48.954 06:44:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:48.954 06:44:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:48.954 06:44:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:48.954 06:44:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:48.954 06:44:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:48.954 06:44:53 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:48.954 06:44:53 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:48.954 06:44:53 -- target/nmic.sh@14 -- # nvmftestinit 00:18:48.954 06:44:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:48.954 06:44:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:48.954 06:44:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:48.954 06:44:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:48.954 06:44:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:48.954 06:44:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.954 06:44:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:48.954 06:44:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.954 06:44:53 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:48.954 06:44:53 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:48.954 06:44:53 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:48.954 06:44:53 -- common/autotest_common.sh@10 -- # set +x 00:18:50.854 06:44:55 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
00:18:50.854 06:44:55 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:50.854 06:44:55 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:50.854 06:44:55 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:50.854 06:44:55 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:50.854 06:44:55 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:50.854 06:44:55 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:50.854 06:44:55 -- nvmf/common.sh@295 -- # net_devs=() 00:18:50.854 06:44:55 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:50.854 06:44:55 -- nvmf/common.sh@296 -- # e810=() 00:18:50.854 06:44:55 -- nvmf/common.sh@296 -- # local -ga e810 00:18:50.854 06:44:55 -- nvmf/common.sh@297 -- # x722=() 00:18:50.854 06:44:55 -- nvmf/common.sh@297 -- # local -ga x722 00:18:50.854 06:44:55 -- nvmf/common.sh@298 -- # mlx=() 00:18:50.854 06:44:55 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:50.854 06:44:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:50.854 06:44:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:50.854 06:44:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:50.854 06:44:55 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:50.854 06:44:55 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:50.854 06:44:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:50.854 06:44:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:50.854 06:44:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:50.854 06:44:55 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:50.854 06:44:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:50.854 06:44:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:50.854 06:44:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:50.854 06:44:55 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:50.854 06:44:55 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:50.854 06:44:55 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:50.854 06:44:55 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:50.854 06:44:55 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:50.854 06:44:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:50.854 06:44:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:50.854 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:50.854 06:44:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:50.854 06:44:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:50.854 06:44:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.854 06:44:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.854 06:44:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:50.854 06:44:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:50.854 06:44:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:50.854 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:50.854 06:44:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:50.854 06:44:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:50.854 06:44:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.854 06:44:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.854 06:44:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:50.854 06:44:55 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
00:18:50.854 06:44:55 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:50.854 06:44:55 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:50.854 06:44:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:50.854 06:44:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.854 06:44:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:50.854 06:44:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.854 06:44:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:50.854 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:50.854 06:44:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.854 06:44:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:50.854 06:44:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.854 06:44:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:50.854 06:44:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.854 06:44:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:50.854 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:50.854 06:44:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.854 06:44:55 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:50.854 06:44:55 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:50.854 06:44:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:50.854 06:44:55 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:50.854 06:44:55 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:50.854 06:44:55 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:50.854 06:44:55 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:50.854 06:44:55 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:50.854 06:44:55 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:50.854 06:44:55 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:50.854 06:44:55 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:50.854 06:44:55 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:50.854 06:44:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:50.854 06:44:55 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:50.854 06:44:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:50.854 06:44:55 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:50.854 06:44:55 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:50.854 06:44:55 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:50.854 06:44:55 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:50.854 06:44:55 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:50.854 06:44:55 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:50.854 06:44:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:51.112 06:44:55 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:51.112 06:44:55 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:51.112 06:44:55 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:51.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:51.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:18:51.112 00:18:51.112 --- 10.0.0.2 ping statistics --- 00:18:51.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.112 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:18:51.112 06:44:55 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:51.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:51.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:18:51.112 00:18:51.112 --- 10.0.0.1 ping statistics --- 00:18:51.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.112 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:18:51.112 06:44:55 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:51.112 06:44:55 -- nvmf/common.sh@411 -- # return 0 00:18:51.112 06:44:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:51.113 06:44:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:51.113 06:44:55 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:51.113 06:44:55 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:51.113 06:44:55 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:51.113 06:44:55 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:51.113 06:44:55 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:51.113 06:44:55 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:51.113 06:44:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:51.113 06:44:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:51.113 06:44:55 -- common/autotest_common.sh@10 -- # set +x 00:18:51.113 06:44:55 -- nvmf/common.sh@470 -- # nvmfpid=4190720 00:18:51.113 06:44:55 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:51.113 06:44:55 -- nvmf/common.sh@471 -- # waitforlisten 4190720 00:18:51.113 06:44:55 -- common/autotest_common.sh@817 -- # '[' -z 4190720 ']' 00:18:51.113 06:44:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.113 06:44:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:51.113 06:44:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.113 06:44:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:51.113 06:44:55 -- common/autotest_common.sh@10 -- # set +x 00:18:51.113 [2024-04-17 06:44:55.592807] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:18:51.113 [2024-04-17 06:44:55.592885] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:51.113 EAL: No free 2048 kB hugepages reported on node 1 00:18:51.113 [2024-04-17 06:44:55.656998] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:51.372 [2024-04-17 06:44:55.744392] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:51.372 [2024-04-17 06:44:55.744442] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:51.372 [2024-04-17 06:44:55.744472] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:51.372 [2024-04-17 06:44:55.744483] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:51.372 [2024-04-17 06:44:55.744493] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:51.372 [2024-04-17 06:44:55.744557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.372 [2024-04-17 06:44:55.744893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:51.372 [2024-04-17 06:44:55.744955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:51.372 [2024-04-17 06:44:55.744958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.372 06:44:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:51.372 06:44:55 -- common/autotest_common.sh@850 -- # return 0 00:18:51.372 06:44:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:51.372 06:44:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:51.372 06:44:55 -- common/autotest_common.sh@10 -- # set +x 00:18:51.372 06:44:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.372 06:44:55 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:51.372 06:44:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.372 06:44:55 -- common/autotest_common.sh@10 -- # set +x 00:18:51.372 [2024-04-17 06:44:55.897857] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:51.372 06:44:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.372 06:44:55 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:51.372 06:44:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.372 06:44:55 -- common/autotest_common.sh@10 -- # set +x 00:18:51.372 Malloc0 00:18:51.372 06:44:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.372 06:44:55 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:51.372 06:44:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.372 06:44:55 -- common/autotest_common.sh@10 -- # set +x 00:18:51.372 06:44:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.372 06:44:55 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:51.372 06:44:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.372 06:44:55 -- common/autotest_common.sh@10 -- # set +x 00:18:51.372 06:44:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.372 06:44:55 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:51.372 06:44:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.372 06:44:55 -- common/autotest_common.sh@10 -- # set +x 00:18:51.372 [2024-04-17 06:44:55.950890] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.372 06:44:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.372 06:44:55 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:51.372 test case1: single bdev can't be used in multiple subsystems 00:18:51.372 06:44:55 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:51.372 06:44:55 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.372 06:44:55 -- common/autotest_common.sh@10 -- # set +x 00:18:51.372 06:44:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.372 06:44:55 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:51.372 06:44:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.372 06:44:55 -- common/autotest_common.sh@10 -- # set +x 00:18:51.372 06:44:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.372 06:44:55 -- target/nmic.sh@28 -- # nmic_status=0 00:18:51.372 06:44:55 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:51.372 06:44:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.372 06:44:55 -- common/autotest_common.sh@10 -- # set +x 00:18:51.372 [2024-04-17 06:44:55.974762] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:51.372 [2024-04-17 06:44:55.974818] subsystem.c:1930:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:51.372 [2024-04-17 06:44:55.974837] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:51.372 request: 00:18:51.372 { 00:18:51.372 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:51.372 "namespace": { 00:18:51.372 "bdev_name": "Malloc0", 00:18:51.372 "no_auto_visible": false 00:18:51.373 }, 00:18:51.631 "method": "nvmf_subsystem_add_ns", 00:18:51.631 "req_id": 1 00:18:51.631 } 00:18:51.631 Got JSON-RPC error response 00:18:51.631 response: 00:18:51.631 { 00:18:51.631 "code": -32602, 00:18:51.631 "message": "Invalid parameters" 00:18:51.631 } 00:18:51.631 06:44:55 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:18:51.631 06:44:55 -- target/nmic.sh@29 -- # nmic_status=1 00:18:51.631 06:44:55 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:51.631 06:44:55 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:51.631 Adding namespace failed - expected result. 
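Editor's note on test case1 above: the second nvmf_subsystem_add_ns is rejected because Malloc0 is already claimed by cnode1, which is exactly the JSON-RPC error printed. A minimal standalone sketch of the same sequence, assuming a running nvmf_tgt listening on the default /var/tmp/spdk.sock and an illustrative ./scripts/rpc.py path (neither taken from this run), would be:
#!/usr/bin/env bash
# Sketch only: reproduce "single bdev can't be used in multiple subsystems".
rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192              # TCP transport, as in the test
$rpc bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # succeeds, claims the bdev
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
  && echo "unexpected: second add_ns succeeded" \
  || echo "second add_ns rejected as expected (bdev already claimed)"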
00:18:51.631 06:44:55 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:51.631 test case2: host connect to nvmf target in multiple paths 00:18:51.631 06:44:55 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:51.631 06:44:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.631 06:44:55 -- common/autotest_common.sh@10 -- # set +x 00:18:51.631 [2024-04-17 06:44:55.982878] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:51.631 06:44:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.631 06:44:55 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:52.196 06:44:56 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:52.761 06:44:57 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:52.761 06:44:57 -- common/autotest_common.sh@1184 -- # local i=0 00:18:52.761 06:44:57 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:52.761 06:44:57 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:18:52.761 06:44:57 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:54.658 06:44:59 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:54.658 06:44:59 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:54.658 06:44:59 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:18:54.658 06:44:59 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:54.658 06:44:59 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:54.658 06:44:59 -- common/autotest_common.sh@1194 -- # return 0 00:18:54.658 06:44:59 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:54.658 [global] 00:18:54.658 thread=1 00:18:54.658 invalidate=1 00:18:54.658 rw=write 00:18:54.658 time_based=1 00:18:54.658 runtime=1 00:18:54.658 ioengine=libaio 00:18:54.658 direct=1 00:18:54.658 bs=4096 00:18:54.658 iodepth=1 00:18:54.658 norandommap=0 00:18:54.658 numjobs=1 00:18:54.658 00:18:54.658 verify_dump=1 00:18:54.658 verify_backlog=512 00:18:54.658 verify_state_save=0 00:18:54.658 do_verify=1 00:18:54.658 verify=crc32c-intel 00:18:54.658 [job0] 00:18:54.658 filename=/dev/nvme0n1 00:18:54.658 Could not set queue depth (nvme0n1) 00:18:54.916 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:54.916 fio-3.35 00:18:54.916 Starting 1 thread 00:18:56.290 00:18:56.290 job0: (groupid=0, jobs=1): err= 0: pid=4191237: Wed Apr 17 06:45:00 2024 00:18:56.290 read: IOPS=1503, BW=6014KiB/s (6158kB/s)(6020KiB/1001msec) 00:18:56.290 slat (nsec): min=5750, max=56622, avg=15373.57, stdev=10621.90 00:18:56.290 clat (usec): min=269, max=1783, avg=374.33, stdev=72.89 00:18:56.290 lat (usec): min=281, max=1793, avg=389.70, stdev=76.90 00:18:56.290 clat percentiles (usec): 00:18:56.290 | 1.00th=[ 281], 5.00th=[ 302], 10.00th=[ 330], 20.00th=[ 343], 00:18:56.290 | 30.00th=[ 351], 40.00th=[ 355], 50.00th=[ 359], 60.00th=[ 367], 00:18:56.290 | 70.00th=[ 375], 
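Editor's note on the job file above: it is emitted by scripts/fio-wrapper (-p nvmf -i 4096 -d 1 -t write -r 1 -v) and points at the namespace that the nvme connect calls exposed as /dev/nvme0n1. A roughly equivalent standalone fio invocation, mirroring the printed job-file keys rather than the wrapper's exact argument handling (illustrative only), would be:
# Sketch only: same workload as [job0] above, run directly with fio.
fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --thread \
    --rw=write --bs=4096 --iodepth=1 --numjobs=1 --time_based --runtime=1 \
    --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512 \
    --verify_state_save=0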
80.00th=[ 388], 90.00th=[ 433], 95.00th=[ 515], 00:18:56.290 | 99.00th=[ 570], 99.50th=[ 586], 99.90th=[ 1004], 99.95th=[ 1778], 00:18:56.290 | 99.99th=[ 1778] 00:18:56.290 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:18:56.290 slat (usec): min=7, max=30992, avg=35.76, stdev=790.44 00:18:56.290 clat (usec): min=168, max=1382, avg=225.01, stdev=61.21 00:18:56.290 lat (usec): min=177, max=31518, avg=260.77, stdev=800.87 00:18:56.290 clat percentiles (usec): 00:18:56.290 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:18:56.290 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 217], 00:18:56.290 | 70.00th=[ 233], 80.00th=[ 260], 90.00th=[ 297], 95.00th=[ 330], 00:18:56.290 | 99.00th=[ 383], 99.50th=[ 424], 99.90th=[ 758], 99.95th=[ 1385], 00:18:56.290 | 99.99th=[ 1385] 00:18:56.290 bw ( KiB/s): min= 7776, max= 7776, per=100.00%, avg=7776.00, stdev= 0.00, samples=1 00:18:56.290 iops : min= 1944, max= 1944, avg=1944.00, stdev= 0.00, samples=1 00:18:56.290 lat (usec) : 250=39.99%, 500=56.92%, 750=2.83%, 1000=0.16% 00:18:56.290 lat (msec) : 2=0.10% 00:18:56.290 cpu : usr=2.50%, sys=5.70%, ctx=3043, majf=0, minf=2 00:18:56.290 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:56.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.290 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.290 issued rwts: total=1505,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.290 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:56.290 00:18:56.290 Run status group 0 (all jobs): 00:18:56.290 READ: bw=6014KiB/s (6158kB/s), 6014KiB/s-6014KiB/s (6158kB/s-6158kB/s), io=6020KiB (6164kB), run=1001-1001msec 00:18:56.290 WRITE: bw=6138KiB/s (6285kB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=6144KiB (6291kB), run=1001-1001msec 00:18:56.290 00:18:56.290 Disk stats (read/write): 00:18:56.290 nvme0n1: ios=1276/1536, merge=0/0, ticks=680/341, in_queue=1021, util=98.90% 00:18:56.290 06:45:00 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:56.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:56.290 06:45:00 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:56.290 06:45:00 -- common/autotest_common.sh@1205 -- # local i=0 00:18:56.290 06:45:00 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:56.290 06:45:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:56.290 06:45:00 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:56.290 06:45:00 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:56.290 06:45:00 -- common/autotest_common.sh@1217 -- # return 0 00:18:56.290 06:45:00 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:56.290 06:45:00 -- target/nmic.sh@53 -- # nvmftestfini 00:18:56.290 06:45:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:56.290 06:45:00 -- nvmf/common.sh@117 -- # sync 00:18:56.290 06:45:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:56.290 06:45:00 -- nvmf/common.sh@120 -- # set +e 00:18:56.290 06:45:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:56.290 06:45:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:56.290 rmmod nvme_tcp 00:18:56.290 rmmod nvme_fabrics 00:18:56.290 rmmod nvme_keyring 00:18:56.290 06:45:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:56.290 06:45:00 -- nvmf/common.sh@124 -- # set -e 00:18:56.290 06:45:00 
-- nvmf/common.sh@125 -- # return 0 00:18:56.290 06:45:00 -- nvmf/common.sh@478 -- # '[' -n 4190720 ']' 00:18:56.290 06:45:00 -- nvmf/common.sh@479 -- # killprocess 4190720 00:18:56.290 06:45:00 -- common/autotest_common.sh@936 -- # '[' -z 4190720 ']' 00:18:56.290 06:45:00 -- common/autotest_common.sh@940 -- # kill -0 4190720 00:18:56.290 06:45:00 -- common/autotest_common.sh@941 -- # uname 00:18:56.290 06:45:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:56.290 06:45:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4190720 00:18:56.290 06:45:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:56.290 06:45:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:56.290 06:45:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4190720' 00:18:56.290 killing process with pid 4190720 00:18:56.290 06:45:00 -- common/autotest_common.sh@955 -- # kill 4190720 00:18:56.290 06:45:00 -- common/autotest_common.sh@960 -- # wait 4190720 00:18:56.548 06:45:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:56.548 06:45:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:56.548 06:45:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:56.548 06:45:00 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:56.548 06:45:00 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:56.548 06:45:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.548 06:45:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:56.548 06:45:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.450 06:45:03 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:58.450 00:18:58.450 real 0m9.647s 00:18:58.450 user 0m21.616s 00:18:58.450 sys 0m2.309s 00:18:58.450 06:45:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:58.450 06:45:03 -- common/autotest_common.sh@10 -- # set +x 00:18:58.450 ************************************ 00:18:58.450 END TEST nvmf_nmic 00:18:58.450 ************************************ 00:18:58.709 06:45:03 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:58.709 06:45:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:58.709 06:45:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:58.709 06:45:03 -- common/autotest_common.sh@10 -- # set +x 00:18:58.709 ************************************ 00:18:58.709 START TEST nvmf_fio_target 00:18:58.709 ************************************ 00:18:58.709 06:45:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:58.709 * Looking for test storage... 
00:18:58.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:58.709 06:45:03 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:58.709 06:45:03 -- nvmf/common.sh@7 -- # uname -s 00:18:58.709 06:45:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:58.709 06:45:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:58.709 06:45:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:58.709 06:45:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:58.709 06:45:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:58.709 06:45:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:58.709 06:45:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:58.709 06:45:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:58.709 06:45:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:58.709 06:45:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:58.709 06:45:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:58.709 06:45:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:58.709 06:45:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:58.709 06:45:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:58.709 06:45:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:58.709 06:45:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:58.709 06:45:03 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:58.709 06:45:03 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:58.709 06:45:03 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:58.709 06:45:03 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:58.709 06:45:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.709 06:45:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.709 06:45:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.709 06:45:03 -- paths/export.sh@5 -- # export PATH 00:18:58.709 06:45:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.709 06:45:03 -- nvmf/common.sh@47 -- # : 0 00:18:58.709 06:45:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:58.709 06:45:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:58.710 06:45:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:58.710 06:45:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:58.710 06:45:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:58.710 06:45:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:58.710 06:45:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:58.710 06:45:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:58.710 06:45:03 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:58.710 06:45:03 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:58.710 06:45:03 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:58.710 06:45:03 -- target/fio.sh@16 -- # nvmftestinit 00:18:58.710 06:45:03 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:58.710 06:45:03 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:58.710 06:45:03 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:58.710 06:45:03 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:58.710 06:45:03 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:58.710 06:45:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.710 06:45:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:58.710 06:45:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.710 06:45:03 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:58.710 06:45:03 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:58.710 06:45:03 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:58.710 06:45:03 -- common/autotest_common.sh@10 -- # set +x 00:19:01.235 06:45:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:01.235 06:45:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:01.235 06:45:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:01.235 06:45:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:01.235 06:45:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:01.235 06:45:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:01.235 06:45:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:01.235 06:45:05 -- nvmf/common.sh@295 -- # net_devs=() 
00:19:01.235 06:45:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:01.235 06:45:05 -- nvmf/common.sh@296 -- # e810=() 00:19:01.235 06:45:05 -- nvmf/common.sh@296 -- # local -ga e810 00:19:01.235 06:45:05 -- nvmf/common.sh@297 -- # x722=() 00:19:01.235 06:45:05 -- nvmf/common.sh@297 -- # local -ga x722 00:19:01.235 06:45:05 -- nvmf/common.sh@298 -- # mlx=() 00:19:01.235 06:45:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:01.235 06:45:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:01.235 06:45:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:01.235 06:45:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:01.235 06:45:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:01.235 06:45:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:01.235 06:45:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:01.235 06:45:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:01.235 06:45:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:01.235 06:45:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:01.235 06:45:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:01.235 06:45:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:01.235 06:45:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:01.235 06:45:05 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:01.235 06:45:05 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:01.235 06:45:05 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:01.235 06:45:05 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:01.235 06:45:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:01.235 06:45:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:01.235 06:45:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:01.235 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:01.235 06:45:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:01.235 06:45:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:01.235 06:45:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.235 06:45:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.235 06:45:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:01.235 06:45:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:01.235 06:45:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:01.235 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:01.235 06:45:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:01.235 06:45:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:01.235 06:45:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.235 06:45:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.235 06:45:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:01.235 06:45:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:01.235 06:45:05 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:01.235 06:45:05 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:01.235 06:45:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:01.235 06:45:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.235 06:45:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:01.235 06:45:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:19:01.235 06:45:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:01.235 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:01.235 06:45:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.235 06:45:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:01.235 06:45:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.235 06:45:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:01.235 06:45:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.235 06:45:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:01.235 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:01.235 06:45:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.235 06:45:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:01.235 06:45:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:01.235 06:45:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:01.235 06:45:05 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:01.235 06:45:05 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:01.235 06:45:05 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:01.235 06:45:05 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:01.235 06:45:05 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:01.235 06:45:05 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:01.235 06:45:05 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:01.235 06:45:05 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:01.235 06:45:05 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:01.235 06:45:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:01.235 06:45:05 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:01.235 06:45:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:01.235 06:45:05 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:01.235 06:45:05 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:01.235 06:45:05 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:01.235 06:45:05 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:01.235 06:45:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:01.235 06:45:05 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:01.235 06:45:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:01.235 06:45:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:01.235 06:45:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:01.235 06:45:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:01.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:01.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:19:01.235 00:19:01.235 --- 10.0.0.2 ping statistics --- 00:19:01.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.235 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:19:01.235 06:45:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:01.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:01.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:19:01.235 00:19:01.235 --- 10.0.0.1 ping statistics --- 00:19:01.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.235 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:19:01.235 06:45:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:01.235 06:45:05 -- nvmf/common.sh@411 -- # return 0 00:19:01.235 06:45:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:01.235 06:45:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:01.235 06:45:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:01.235 06:45:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:01.236 06:45:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:01.236 06:45:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:01.236 06:45:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:01.236 06:45:05 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:01.236 06:45:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:01.236 06:45:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:01.236 06:45:05 -- common/autotest_common.sh@10 -- # set +x 00:19:01.236 06:45:05 -- nvmf/common.sh@470 -- # nvmfpid=4193433 00:19:01.236 06:45:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:01.236 06:45:05 -- nvmf/common.sh@471 -- # waitforlisten 4193433 00:19:01.236 06:45:05 -- common/autotest_common.sh@817 -- # '[' -z 4193433 ']' 00:19:01.236 06:45:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.236 06:45:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:01.236 06:45:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.236 06:45:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:01.236 06:45:05 -- common/autotest_common.sh@10 -- # set +x 00:19:01.236 [2024-04-17 06:45:05.484912] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:19:01.236 [2024-04-17 06:45:05.484992] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.236 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.236 [2024-04-17 06:45:05.551634] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:01.236 [2024-04-17 06:45:05.648344] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.236 [2024-04-17 06:45:05.648400] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.236 [2024-04-17 06:45:05.648417] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:01.236 [2024-04-17 06:45:05.648430] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:01.236 [2024-04-17 06:45:05.648441] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:01.236 [2024-04-17 06:45:05.648509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.236 [2024-04-17 06:45:05.648540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.236 [2024-04-17 06:45:05.648908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:01.236 [2024-04-17 06:45:05.648913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.236 06:45:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:01.236 06:45:05 -- common/autotest_common.sh@850 -- # return 0 00:19:01.236 06:45:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:01.236 06:45:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:01.236 06:45:05 -- common/autotest_common.sh@10 -- # set +x 00:19:01.236 06:45:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.236 06:45:05 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:01.493 [2024-04-17 06:45:06.026719] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.493 06:45:06 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:01.751 06:45:06 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:01.751 06:45:06 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:02.009 06:45:06 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:02.009 06:45:06 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:02.267 06:45:06 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:02.267 06:45:06 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:02.526 06:45:07 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:02.526 06:45:07 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:02.784 06:45:07 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:03.350 06:45:07 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:03.350 06:45:07 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:03.350 06:45:07 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:03.350 06:45:07 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:03.608 06:45:08 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:03.608 06:45:08 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:03.866 06:45:08 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:04.124 06:45:08 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:04.124 06:45:08 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:04.381 06:45:08 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:04.381 06:45:08 
-- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:04.639 06:45:09 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:04.897 [2024-04-17 06:45:09.432060] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.897 06:45:09 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:05.154 06:45:09 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:05.412 06:45:09 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:05.978 06:45:10 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:05.978 06:45:10 -- common/autotest_common.sh@1184 -- # local i=0 00:19:05.978 06:45:10 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:19:05.978 06:45:10 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:19:05.978 06:45:10 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:19:05.978 06:45:10 -- common/autotest_common.sh@1191 -- # sleep 2 00:19:08.534 06:45:12 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:19:08.534 06:45:12 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:19:08.534 06:45:12 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:19:08.534 06:45:12 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:19:08.534 06:45:12 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:19:08.534 06:45:12 -- common/autotest_common.sh@1194 -- # return 0 00:19:08.534 06:45:12 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:08.534 [global] 00:19:08.534 thread=1 00:19:08.534 invalidate=1 00:19:08.534 rw=write 00:19:08.534 time_based=1 00:19:08.534 runtime=1 00:19:08.534 ioengine=libaio 00:19:08.534 direct=1 00:19:08.534 bs=4096 00:19:08.534 iodepth=1 00:19:08.534 norandommap=0 00:19:08.534 numjobs=1 00:19:08.534 00:19:08.534 verify_dump=1 00:19:08.534 verify_backlog=512 00:19:08.534 verify_state_save=0 00:19:08.534 do_verify=1 00:19:08.534 verify=crc32c-intel 00:19:08.534 [job0] 00:19:08.534 filename=/dev/nvme0n1 00:19:08.534 [job1] 00:19:08.534 filename=/dev/nvme0n2 00:19:08.534 [job2] 00:19:08.534 filename=/dev/nvme0n3 00:19:08.534 [job3] 00:19:08.534 filename=/dev/nvme0n4 00:19:08.534 Could not set queue depth (nvme0n1) 00:19:08.534 Could not set queue depth (nvme0n2) 00:19:08.534 Could not set queue depth (nvme0n3) 00:19:08.534 Could not set queue depth (nvme0n4) 00:19:08.534 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:08.534 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:08.534 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:08.534 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:08.534 fio-3.35 
00:19:08.534 Starting 4 threads 00:19:09.468 00:19:09.468 job0: (groupid=0, jobs=1): err= 0: pid=1071: Wed Apr 17 06:45:14 2024 00:19:09.468 read: IOPS=16, BW=66.5KiB/s (68.1kB/s)(68.0KiB/1022msec) 00:19:09.468 slat (nsec): min=8285, max=34641, avg=19949.47, stdev=9815.61 00:19:09.468 clat (usec): min=40698, max=42638, avg=41483.75, stdev=642.59 00:19:09.468 lat (usec): min=40712, max=42652, avg=41503.70, stdev=641.71 00:19:09.468 clat percentiles (usec): 00:19:09.468 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:19:09.468 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[42206], 00:19:09.468 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:19:09.468 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:19:09.468 | 99.99th=[42730] 00:19:09.468 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:19:09.468 slat (nsec): min=9821, max=67832, avg=26039.65, stdev=11938.21 00:19:09.468 clat (usec): min=233, max=2165, avg=585.59, stdev=189.93 00:19:09.468 lat (usec): min=248, max=2196, avg=611.63, stdev=187.31 00:19:09.468 clat percentiles (usec): 00:19:09.468 | 1.00th=[ 285], 5.00th=[ 347], 10.00th=[ 408], 20.00th=[ 453], 00:19:09.468 | 30.00th=[ 478], 40.00th=[ 523], 50.00th=[ 570], 60.00th=[ 603], 00:19:09.468 | 70.00th=[ 644], 80.00th=[ 701], 90.00th=[ 783], 95.00th=[ 865], 00:19:09.468 | 99.00th=[ 1139], 99.50th=[ 1795], 99.90th=[ 2180], 99.95th=[ 2180], 00:19:09.469 | 99.99th=[ 2180] 00:19:09.469 bw ( KiB/s): min= 4096, max= 4096, per=29.74%, avg=4096.00, stdev= 0.00, samples=1 00:19:09.469 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:09.469 lat (usec) : 250=0.19%, 500=34.59%, 750=49.72%, 1000=10.96% 00:19:09.469 lat (msec) : 2=0.95%, 4=0.38%, 50=3.21% 00:19:09.469 cpu : usr=0.78%, sys=1.67%, ctx=530, majf=0, minf=1 00:19:09.469 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:09.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.469 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.469 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.469 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:09.469 job1: (groupid=0, jobs=1): err= 0: pid=1072: Wed Apr 17 06:45:14 2024 00:19:09.469 read: IOPS=1489, BW=5958KiB/s (6101kB/s)(5964KiB/1001msec) 00:19:09.469 slat (nsec): min=5605, max=33539, avg=10812.25, stdev=5303.95 00:19:09.469 clat (usec): min=321, max=3035, avg=406.10, stdev=82.10 00:19:09.469 lat (usec): min=328, max=3041, avg=416.91, stdev=82.47 00:19:09.469 clat percentiles (usec): 00:19:09.469 | 1.00th=[ 338], 5.00th=[ 351], 10.00th=[ 359], 20.00th=[ 371], 00:19:09.469 | 30.00th=[ 379], 40.00th=[ 388], 50.00th=[ 396], 60.00th=[ 400], 00:19:09.469 | 70.00th=[ 408], 80.00th=[ 429], 90.00th=[ 474], 95.00th=[ 502], 00:19:09.469 | 99.00th=[ 553], 99.50th=[ 594], 99.90th=[ 619], 99.95th=[ 3032], 00:19:09.469 | 99.99th=[ 3032] 00:19:09.469 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:09.469 slat (usec): min=7, max=1295, avg=14.69, stdev=33.67 00:19:09.469 clat (usec): min=173, max=845, avg=225.00, stdev=55.69 00:19:09.469 lat (usec): min=181, max=1860, avg=239.69, stdev=71.51 00:19:09.469 clat percentiles (usec): 00:19:09.469 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 198], 00:19:09.469 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 223], 00:19:09.469 | 70.00th=[ 231], 80.00th=[ 
241], 90.00th=[ 253], 95.00th=[ 269], 00:19:09.469 | 99.00th=[ 523], 99.50th=[ 652], 99.90th=[ 848], 99.95th=[ 848], 00:19:09.469 | 99.99th=[ 848] 00:19:09.469 bw ( KiB/s): min= 8192, max= 8192, per=59.49%, avg=8192.00, stdev= 0.00, samples=1 00:19:09.469 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:09.469 lat (usec) : 250=45.03%, 500=51.83%, 750=3.01%, 1000=0.10% 00:19:09.469 lat (msec) : 4=0.03% 00:19:09.469 cpu : usr=2.80%, sys=4.90%, ctx=3029, majf=0, minf=2 00:19:09.469 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:09.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.469 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.469 issued rwts: total=1491,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.469 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:09.469 job2: (groupid=0, jobs=1): err= 0: pid=1073: Wed Apr 17 06:45:14 2024 00:19:09.469 read: IOPS=19, BW=76.8KiB/s (78.7kB/s)(80.0KiB/1041msec) 00:19:09.469 slat (nsec): min=12905, max=33776, avg=19985.35, stdev=9181.70 00:19:09.469 clat (usec): min=619, max=41873, avg=37129.82, stdev=12486.87 00:19:09.469 lat (usec): min=633, max=41887, avg=37149.81, stdev=12485.60 00:19:09.469 clat percentiles (usec): 00:19:09.469 | 1.00th=[ 619], 5.00th=[ 619], 10.00th=[ 644], 20.00th=[40633], 00:19:09.469 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:09.469 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:19:09.469 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:19:09.469 | 99.99th=[41681] 00:19:09.469 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:19:09.469 slat (nsec): min=8904, max=63699, avg=25196.46, stdev=11711.43 00:19:09.469 clat (usec): min=223, max=2026, avg=549.13, stdev=180.55 00:19:09.469 lat (usec): min=241, max=2062, avg=574.33, stdev=178.80 00:19:09.469 clat percentiles (usec): 00:19:09.469 | 1.00th=[ 258], 5.00th=[ 334], 10.00th=[ 375], 20.00th=[ 424], 00:19:09.469 | 30.00th=[ 457], 40.00th=[ 482], 50.00th=[ 523], 60.00th=[ 562], 00:19:09.469 | 70.00th=[ 619], 80.00th=[ 685], 90.00th=[ 734], 95.00th=[ 791], 00:19:09.469 | 99.00th=[ 996], 99.50th=[ 1745], 99.90th=[ 2024], 99.95th=[ 2024], 00:19:09.469 | 99.99th=[ 2024] 00:19:09.469 bw ( KiB/s): min= 4096, max= 4096, per=29.74%, avg=4096.00, stdev= 0.00, samples=1 00:19:09.469 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:09.469 lat (usec) : 250=0.75%, 500=42.86%, 750=44.92%, 1000=7.14% 00:19:09.469 lat (msec) : 2=0.56%, 4=0.38%, 50=3.38% 00:19:09.469 cpu : usr=0.38%, sys=1.63%, ctx=533, majf=0, minf=1 00:19:09.469 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:09.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.469 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.469 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.469 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:09.469 job3: (groupid=0, jobs=1): err= 0: pid=1074: Wed Apr 17 06:45:14 2024 00:19:09.469 read: IOPS=958, BW=3832KiB/s (3924kB/s)(3836KiB/1001msec) 00:19:09.469 slat (nsec): min=5984, max=66616, avg=18131.27, stdev=10389.90 00:19:09.469 clat (usec): min=366, max=41154, avg=710.13, stdev=1325.53 00:19:09.469 lat (usec): min=379, max=41167, avg=728.26, stdev=1325.81 00:19:09.469 clat percentiles (usec): 00:19:09.469 | 
1.00th=[ 400], 5.00th=[ 433], 10.00th=[ 445], 20.00th=[ 506], 00:19:09.469 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 644], 00:19:09.469 | 70.00th=[ 709], 80.00th=[ 783], 90.00th=[ 971], 95.00th=[ 1221], 00:19:09.469 | 99.00th=[ 1385], 99.50th=[ 1483], 99.90th=[41157], 99.95th=[41157], 00:19:09.469 | 99.99th=[41157] 00:19:09.469 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:19:09.469 slat (nsec): min=8166, max=55194, avg=16497.40, stdev=9219.35 00:19:09.469 clat (usec): min=176, max=2037, avg=269.00, stdev=141.54 00:19:09.469 lat (usec): min=184, max=2061, avg=285.49, stdev=145.46 00:19:09.469 clat percentiles (usec): 00:19:09.469 | 1.00th=[ 188], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 204], 00:19:09.469 | 30.00th=[ 221], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 245], 00:19:09.469 | 70.00th=[ 253], 80.00th=[ 273], 90.00th=[ 318], 95.00th=[ 562], 00:19:09.469 | 99.00th=[ 775], 99.50th=[ 865], 99.90th=[ 1876], 99.95th=[ 2040], 00:19:09.469 | 99.99th=[ 2040] 00:19:09.469 bw ( KiB/s): min= 4096, max= 4096, per=29.74%, avg=4096.00, stdev= 0.00, samples=1 00:19:09.469 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:09.469 lat (usec) : 250=34.75%, 500=23.25%, 750=29.85%, 1000=7.46% 00:19:09.469 lat (msec) : 2=4.59%, 4=0.05%, 50=0.05% 00:19:09.469 cpu : usr=2.80%, sys=4.10%, ctx=1985, majf=0, minf=1 00:19:09.469 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:09.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.469 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.469 issued rwts: total=959,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.469 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:09.469 00:19:09.469 Run status group 0 (all jobs): 00:19:09.469 READ: bw=9556KiB/s (9786kB/s), 66.5KiB/s-5958KiB/s (68.1kB/s-6101kB/s), io=9948KiB (10.2MB), run=1001-1041msec 00:19:09.469 WRITE: bw=13.4MiB/s (14.1MB/s), 1967KiB/s-6138KiB/s (2015kB/s-6285kB/s), io=14.0MiB (14.7MB), run=1001-1041msec 00:19:09.469 00:19:09.469 Disk stats (read/write): 00:19:09.469 nvme0n1: ios=34/512, merge=0/0, ticks=1330/283, in_queue=1613, util=85.57% 00:19:09.469 nvme0n2: ios=1197/1536, merge=0/0, ticks=556/319, in_queue=875, util=89.52% 00:19:09.469 nvme0n3: ios=37/512, merge=0/0, ticks=1439/276, in_queue=1715, util=93.52% 00:19:09.469 nvme0n4: ios=771/1024, merge=0/0, ticks=1438/267, in_queue=1705, util=94.21% 00:19:09.469 06:45:14 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:09.469 [global] 00:19:09.469 thread=1 00:19:09.469 invalidate=1 00:19:09.469 rw=randwrite 00:19:09.469 time_based=1 00:19:09.469 runtime=1 00:19:09.469 ioengine=libaio 00:19:09.469 direct=1 00:19:09.469 bs=4096 00:19:09.469 iodepth=1 00:19:09.469 norandommap=0 00:19:09.469 numjobs=1 00:19:09.469 00:19:09.469 verify_dump=1 00:19:09.469 verify_backlog=512 00:19:09.469 verify_state_save=0 00:19:09.469 do_verify=1 00:19:09.469 verify=crc32c-intel 00:19:09.469 [job0] 00:19:09.469 filename=/dev/nvme0n1 00:19:09.469 [job1] 00:19:09.469 filename=/dev/nvme0n2 00:19:09.469 [job2] 00:19:09.469 filename=/dev/nvme0n3 00:19:09.469 [job3] 00:19:09.469 filename=/dev/nvme0n4 00:19:09.727 Could not set queue depth (nvme0n1) 00:19:09.727 Could not set queue depth (nvme0n2) 00:19:09.727 Could not set queue depth (nvme0n3) 00:19:09.727 Could not set queue depth (nvme0n4) 00:19:09.727 job0: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:09.727 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:09.727 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:09.727 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:09.727 fio-3.35 00:19:09.727 Starting 4 threads 00:19:11.104 00:19:11.104 job0: (groupid=0, jobs=1): err= 0: pid=1299: Wed Apr 17 06:45:15 2024 00:19:11.104 read: IOPS=43, BW=176KiB/s (180kB/s)(176KiB/1002msec) 00:19:11.104 slat (nsec): min=7411, max=30453, avg=16682.70, stdev=7255.25 00:19:11.104 clat (usec): min=415, max=41690, avg=18947.26, stdev=20405.57 00:19:11.104 lat (usec): min=429, max=41704, avg=18963.94, stdev=20402.79 00:19:11.104 clat percentiles (usec): 00:19:11.104 | 1.00th=[ 416], 5.00th=[ 465], 10.00th=[ 490], 20.00th=[ 502], 00:19:11.104 | 30.00th=[ 529], 40.00th=[ 562], 50.00th=[ 644], 60.00th=[41157], 00:19:11.104 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:11.104 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:19:11.104 | 99.99th=[41681] 00:19:11.104 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:19:11.104 slat (nsec): min=7042, max=59784, avg=17082.12, stdev=9056.24 00:19:11.104 clat (usec): min=180, max=495, avg=303.83, stdev=68.47 00:19:11.104 lat (usec): min=192, max=536, avg=320.91, stdev=68.46 00:19:11.104 clat percentiles (usec): 00:19:11.104 | 1.00th=[ 198], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 233], 00:19:11.104 | 30.00th=[ 260], 40.00th=[ 285], 50.00th=[ 302], 60.00th=[ 318], 00:19:11.104 | 70.00th=[ 334], 80.00th=[ 363], 90.00th=[ 400], 95.00th=[ 437], 00:19:11.104 | 99.00th=[ 478], 99.50th=[ 490], 99.90th=[ 494], 99.95th=[ 494], 00:19:11.104 | 99.99th=[ 494] 00:19:11.104 bw ( KiB/s): min= 4096, max= 4096, per=29.71%, avg=4096.00, stdev= 0.00, samples=1 00:19:11.104 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:11.104 lat (usec) : 250=24.64%, 500=68.88%, 750=2.88% 00:19:11.104 lat (msec) : 50=3.60% 00:19:11.104 cpu : usr=0.40%, sys=1.10%, ctx=557, majf=0, minf=1 00:19:11.104 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:11.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.105 issued rwts: total=44,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.105 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:11.105 job1: (groupid=0, jobs=1): err= 0: pid=1300: Wed Apr 17 06:45:15 2024 00:19:11.105 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:19:11.105 slat (nsec): min=4657, max=71025, avg=11083.58, stdev=7717.61 00:19:11.105 clat (usec): min=287, max=41278, avg=1461.07, stdev=6636.73 00:19:11.105 lat (usec): min=298, max=41308, avg=1472.15, stdev=6637.45 00:19:11.105 clat percentiles (usec): 00:19:11.105 | 1.00th=[ 297], 5.00th=[ 302], 10.00th=[ 310], 20.00th=[ 318], 00:19:11.105 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 343], 60.00th=[ 375], 00:19:11.105 | 70.00th=[ 379], 80.00th=[ 383], 90.00th=[ 388], 95.00th=[ 412], 00:19:11.105 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:11.105 | 99.99th=[41157] 00:19:11.105 write: IOPS=896, BW=3584KiB/s (3670kB/s)(3588KiB/1001msec); 0 zone 
resets 00:19:11.105 slat (nsec): min=6639, max=59075, avg=17455.08, stdev=9321.70 00:19:11.105 clat (usec): min=187, max=1230, avg=250.93, stdev=68.06 00:19:11.105 lat (usec): min=196, max=1237, avg=268.39, stdev=69.03 00:19:11.105 clat percentiles (usec): 00:19:11.105 | 1.00th=[ 192], 5.00th=[ 204], 10.00th=[ 212], 20.00th=[ 219], 00:19:11.105 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 243], 00:19:11.105 | 70.00th=[ 251], 80.00th=[ 269], 90.00th=[ 297], 95.00th=[ 343], 00:19:11.105 | 99.00th=[ 416], 99.50th=[ 742], 99.90th=[ 1237], 99.95th=[ 1237], 00:19:11.105 | 99.99th=[ 1237] 00:19:11.105 bw ( KiB/s): min= 4096, max= 4096, per=29.71%, avg=4096.00, stdev= 0.00, samples=1 00:19:11.105 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:11.105 lat (usec) : 250=44.29%, 500=54.08%, 750=0.35%, 1000=0.21% 00:19:11.105 lat (msec) : 2=0.07%, 50=0.99% 00:19:11.105 cpu : usr=1.40%, sys=2.00%, ctx=1410, majf=0, minf=1 00:19:11.105 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:11.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.105 issued rwts: total=512,897,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.105 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:11.105 job2: (groupid=0, jobs=1): err= 0: pid=1307: Wed Apr 17 06:45:15 2024 00:19:11.105 read: IOPS=19, BW=79.8KiB/s (81.7kB/s)(80.0KiB/1003msec) 00:19:11.105 slat (nsec): min=11307, max=31030, avg=14370.95, stdev=4111.08 00:19:11.105 clat (usec): min=40891, max=42954, avg=41177.49, stdev=518.80 00:19:11.105 lat (usec): min=40904, max=42972, avg=41191.86, stdev=520.93 00:19:11.105 clat percentiles (usec): 00:19:11.105 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:19:11.105 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:11.105 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:19:11.105 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:19:11.105 | 99.99th=[42730] 00:19:11.105 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:19:11.105 slat (nsec): min=7894, max=68930, avg=18588.76, stdev=10539.02 00:19:11.105 clat (usec): min=193, max=952, avg=324.90, stdev=69.72 00:19:11.105 lat (usec): min=216, max=983, avg=343.49, stdev=70.92 00:19:11.105 clat percentiles (usec): 00:19:11.105 | 1.00th=[ 210], 5.00th=[ 235], 10.00th=[ 245], 20.00th=[ 265], 00:19:11.105 | 30.00th=[ 285], 40.00th=[ 297], 50.00th=[ 314], 60.00th=[ 334], 00:19:11.105 | 70.00th=[ 355], 80.00th=[ 375], 90.00th=[ 408], 95.00th=[ 449], 00:19:11.105 | 99.00th=[ 490], 99.50th=[ 515], 99.90th=[ 955], 99.95th=[ 955], 00:19:11.105 | 99.99th=[ 955] 00:19:11.105 bw ( KiB/s): min= 4096, max= 4096, per=29.71%, avg=4096.00, stdev= 0.00, samples=1 00:19:11.105 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:11.105 lat (usec) : 250=10.90%, 500=84.59%, 750=0.56%, 1000=0.19% 00:19:11.105 lat (msec) : 50=3.76% 00:19:11.105 cpu : usr=1.00%, sys=0.90%, ctx=533, majf=0, minf=2 00:19:11.105 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:11.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.105 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.105 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:19:11.105 job3: (groupid=0, jobs=1): err= 0: pid=1308: Wed Apr 17 06:45:15 2024 00:19:11.105 read: IOPS=1324, BW=5299KiB/s (5426kB/s)(5304KiB/1001msec) 00:19:11.105 slat (nsec): min=5717, max=47020, avg=8899.63, stdev=4893.90 00:19:11.105 clat (usec): min=370, max=646, avg=426.07, stdev=25.68 00:19:11.105 lat (usec): min=379, max=652, avg=434.97, stdev=27.54 00:19:11.105 clat percentiles (usec): 00:19:11.105 | 1.00th=[ 388], 5.00th=[ 396], 10.00th=[ 400], 20.00th=[ 408], 00:19:11.105 | 30.00th=[ 412], 40.00th=[ 416], 50.00th=[ 424], 60.00th=[ 429], 00:19:11.105 | 70.00th=[ 433], 80.00th=[ 441], 90.00th=[ 453], 95.00th=[ 474], 00:19:11.105 | 99.00th=[ 515], 99.50th=[ 529], 99.90th=[ 611], 99.95th=[ 644], 00:19:11.105 | 99.99th=[ 644] 00:19:11.105 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:11.105 slat (nsec): min=7074, max=67791, avg=11764.88, stdev=8166.19 00:19:11.105 clat (usec): min=194, max=562, avg=258.01, stdev=73.60 00:19:11.105 lat (usec): min=202, max=594, avg=269.77, stdev=78.60 00:19:11.105 clat percentiles (usec): 00:19:11.105 | 1.00th=[ 198], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 210], 00:19:11.105 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 231], 00:19:11.105 | 70.00th=[ 251], 80.00th=[ 310], 90.00th=[ 383], 95.00th=[ 429], 00:19:11.105 | 99.00th=[ 494], 99.50th=[ 529], 99.90th=[ 562], 99.95th=[ 562], 00:19:11.105 | 99.99th=[ 562] 00:19:11.105 bw ( KiB/s): min= 7288, max= 7288, per=52.86%, avg=7288.00, stdev= 0.00, samples=1 00:19:11.105 iops : min= 1822, max= 1822, avg=1822.00, stdev= 0.00, samples=1 00:19:11.105 lat (usec) : 250=37.46%, 500=61.11%, 750=1.43% 00:19:11.105 cpu : usr=2.70%, sys=3.60%, ctx=2863, majf=0, minf=1 00:19:11.105 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:11.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.105 issued rwts: total=1326,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.105 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:11.105 00:19:11.105 Run status group 0 (all jobs): 00:19:11.105 READ: bw=7585KiB/s (7767kB/s), 79.8KiB/s-5299KiB/s (81.7kB/s-5426kB/s), io=7608KiB (7791kB), run=1001-1003msec 00:19:11.105 WRITE: bw=13.5MiB/s (14.1MB/s), 2042KiB/s-6138KiB/s (2091kB/s-6285kB/s), io=13.5MiB (14.2MB), run=1001-1003msec 00:19:11.105 00:19:11.105 Disk stats (read/write): 00:19:11.105 nvme0n1: ios=90/512, merge=0/0, ticks=1437/149, in_queue=1586, util=85.57% 00:19:11.105 nvme0n2: ios=426/512, merge=0/0, ticks=1119/131, in_queue=1250, util=91.27% 00:19:11.105 nvme0n3: ios=73/512, merge=0/0, ticks=1009/161, in_queue=1170, util=93.63% 00:19:11.105 nvme0n4: ios=1081/1487, merge=0/0, ticks=524/354, in_queue=878, util=96.00% 00:19:11.105 06:45:15 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:11.105 [global] 00:19:11.105 thread=1 00:19:11.105 invalidate=1 00:19:11.105 rw=write 00:19:11.105 time_based=1 00:19:11.105 runtime=1 00:19:11.105 ioengine=libaio 00:19:11.105 direct=1 00:19:11.105 bs=4096 00:19:11.105 iodepth=128 00:19:11.105 norandommap=0 00:19:11.105 numjobs=1 00:19:11.105 00:19:11.105 verify_dump=1 00:19:11.105 verify_backlog=512 00:19:11.105 verify_state_save=0 00:19:11.105 do_verify=1 00:19:11.105 verify=crc32c-intel 00:19:11.105 [job0] 00:19:11.105 filename=/dev/nvme0n1 00:19:11.105 [job1] 
00:19:11.105 filename=/dev/nvme0n2 00:19:11.105 [job2] 00:19:11.105 filename=/dev/nvme0n3 00:19:11.105 [job3] 00:19:11.105 filename=/dev/nvme0n4 00:19:11.105 Could not set queue depth (nvme0n1) 00:19:11.105 Could not set queue depth (nvme0n2) 00:19:11.105 Could not set queue depth (nvme0n3) 00:19:11.105 Could not set queue depth (nvme0n4) 00:19:11.363 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:11.363 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:11.363 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:11.363 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:11.363 fio-3.35 00:19:11.363 Starting 4 threads 00:19:12.737 00:19:12.737 job0: (groupid=0, jobs=1): err= 0: pid=1546: Wed Apr 17 06:45:16 2024 00:19:12.737 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:19:12.737 slat (usec): min=2, max=15306, avg=109.15, stdev=638.55 00:19:12.737 clat (usec): min=1092, max=31512, avg=14519.40, stdev=1789.15 00:19:12.737 lat (usec): min=1096, max=31521, avg=14628.55, stdev=1819.15 00:19:12.737 clat percentiles (usec): 00:19:12.737 | 1.00th=[11338], 5.00th=[11994], 10.00th=[12649], 20.00th=[13304], 00:19:12.737 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14222], 60.00th=[14746], 00:19:12.737 | 70.00th=[15533], 80.00th=[15926], 90.00th=[16909], 95.00th=[17695], 00:19:12.737 | 99.00th=[18220], 99.50th=[19006], 99.90th=[26870], 99.95th=[27132], 00:19:12.737 | 99.99th=[31589] 00:19:12.737 write: IOPS=4139, BW=16.2MiB/s (17.0MB/s)(16.2MiB/1005msec); 0 zone resets 00:19:12.737 slat (usec): min=3, max=12464, avg=123.59, stdev=677.11 00:19:12.737 clat (usec): min=609, max=51765, avg=16319.06, stdev=4601.10 00:19:12.737 lat (usec): min=1389, max=51770, avg=16442.65, stdev=4638.62 00:19:12.737 clat percentiles (usec): 00:19:12.737 | 1.00th=[ 3163], 5.00th=[ 7177], 10.00th=[11469], 20.00th=[14091], 00:19:12.737 | 30.00th=[15795], 40.00th=[16319], 50.00th=[16712], 60.00th=[17433], 00:19:12.737 | 70.00th=[17957], 80.00th=[19268], 90.00th=[19530], 95.00th=[20579], 00:19:12.737 | 99.00th=[31851], 99.50th=[42730], 99.90th=[46924], 99.95th=[46924], 00:19:12.737 | 99.99th=[51643] 00:19:12.737 bw ( KiB/s): min=16384, max=16384, per=29.08%, avg=16384.00, stdev= 0.00, samples=2 00:19:12.737 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:19:12.737 lat (usec) : 750=0.01%, 1000=0.01% 00:19:12.737 lat (msec) : 2=0.06%, 4=0.70%, 10=3.38%, 20=92.66%, 50=3.16% 00:19:12.737 lat (msec) : 100=0.01% 00:19:12.737 cpu : usr=4.58%, sys=6.97%, ctx=384, majf=0, minf=1 00:19:12.737 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:12.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:12.737 issued rwts: total=4096,4160,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.737 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:12.737 job1: (groupid=0, jobs=1): err= 0: pid=1556: Wed Apr 17 06:45:16 2024 00:19:12.737 read: IOPS=3353, BW=13.1MiB/s (13.7MB/s)(13.7MiB/1048msec) 00:19:12.737 slat (usec): min=2, max=12367, avg=140.03, stdev=883.54 00:19:12.737 clat (usec): min=11693, max=57196, avg=19558.26, stdev=7402.12 00:19:12.737 lat (usec): min=11699, max=64685, avg=19698.29, stdev=7450.34 00:19:12.737 
clat percentiles (usec): 00:19:12.737 | 1.00th=[12649], 5.00th=[14222], 10.00th=[14877], 20.00th=[15795], 00:19:12.737 | 30.00th=[16057], 40.00th=[17171], 50.00th=[17695], 60.00th=[18220], 00:19:12.737 | 70.00th=[19268], 80.00th=[20317], 90.00th=[26346], 95.00th=[29230], 00:19:12.737 | 99.00th=[56886], 99.50th=[56886], 99.90th=[57410], 99.95th=[57410], 00:19:12.737 | 99.99th=[57410] 00:19:12.737 write: IOPS=3419, BW=13.4MiB/s (14.0MB/s)(14.0MiB/1048msec); 0 zone resets 00:19:12.737 slat (usec): min=4, max=10104, avg=134.53, stdev=814.77 00:19:12.737 clat (usec): min=7157, max=46740, avg=17794.18, stdev=7619.73 00:19:12.737 lat (usec): min=7200, max=46765, avg=17928.71, stdev=7693.75 00:19:12.737 clat percentiles (usec): 00:19:12.737 | 1.00th=[10814], 5.00th=[11863], 10.00th=[12256], 20.00th=[13304], 00:19:12.737 | 30.00th=[13698], 40.00th=[14746], 50.00th=[15401], 60.00th=[15795], 00:19:12.737 | 70.00th=[16909], 80.00th=[19268], 90.00th=[30016], 95.00th=[35914], 00:19:12.737 | 99.00th=[45876], 99.50th=[45876], 99.90th=[46924], 99.95th=[46924], 00:19:12.737 | 99.99th=[46924] 00:19:12.737 bw ( KiB/s): min=12288, max=16384, per=25.44%, avg=14336.00, stdev=2896.31, samples=2 00:19:12.737 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:19:12.737 lat (msec) : 10=0.21%, 20=76.47%, 50=22.43%, 100=0.89% 00:19:12.737 cpu : usr=4.01%, sys=5.64%, ctx=236, majf=0, minf=1 00:19:12.737 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:12.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:12.737 issued rwts: total=3514,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.737 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:12.738 job2: (groupid=0, jobs=1): err= 0: pid=1594: Wed Apr 17 06:45:16 2024 00:19:12.738 read: IOPS=4780, BW=18.7MiB/s (19.6MB/s)(18.8MiB/1005msec) 00:19:12.738 slat (usec): min=2, max=11966, avg=94.81, stdev=548.53 00:19:12.738 clat (usec): min=1628, max=28163, avg=12354.72, stdev=2041.95 00:19:12.738 lat (usec): min=5899, max=28194, avg=12449.54, stdev=2071.51 00:19:12.738 clat percentiles (usec): 00:19:12.738 | 1.00th=[ 6390], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[11207], 00:19:12.738 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12256], 60.00th=[12387], 00:19:12.738 | 70.00th=[12911], 80.00th=[13304], 90.00th=[14746], 95.00th=[15401], 00:19:12.738 | 99.00th=[21890], 99.50th=[22152], 99.90th=[22414], 99.95th=[22676], 00:19:12.738 | 99.99th=[28181] 00:19:12.738 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:19:12.738 slat (usec): min=4, max=17892, avg=98.20, stdev=673.26 00:19:12.738 clat (usec): min=5947, max=43945, avg=13222.85, stdev=4930.81 00:19:12.738 lat (usec): min=5953, max=43996, avg=13321.05, stdev=4972.63 00:19:12.738 clat percentiles (usec): 00:19:12.738 | 1.00th=[ 7242], 5.00th=[ 8586], 10.00th=[10683], 20.00th=[11338], 00:19:12.738 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:19:12.738 | 70.00th=[12518], 80.00th=[12911], 90.00th=[17171], 95.00th=[25297], 00:19:12.738 | 99.00th=[37487], 99.50th=[38011], 99.90th=[38011], 99.95th=[41681], 00:19:12.738 | 99.99th=[43779] 00:19:12.738 bw ( KiB/s): min=19768, max=21192, per=36.35%, avg=20480.00, stdev=1006.92, samples=2 00:19:12.738 iops : min= 4942, max= 5298, avg=5120.00, stdev=251.73, samples=2 00:19:12.738 lat (msec) : 2=0.01%, 10=7.53%, 20=88.59%, 50=3.87% 00:19:12.738 cpu : 
usr=4.88%, sys=9.56%, ctx=445, majf=0, minf=1 00:19:12.738 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:12.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.738 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:12.738 issued rwts: total=4804,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.738 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:12.738 job3: (groupid=0, jobs=1): err= 0: pid=1608: Wed Apr 17 06:45:16 2024 00:19:12.738 read: IOPS=1523, BW=6095KiB/s (6242kB/s)(6144KiB/1008msec) 00:19:12.738 slat (usec): min=3, max=16344, avg=280.24, stdev=1474.12 00:19:12.738 clat (usec): min=14357, max=60102, avg=34740.15, stdev=9863.96 00:19:12.738 lat (usec): min=14364, max=60140, avg=35020.39, stdev=9980.38 00:19:12.738 clat percentiles (usec): 00:19:12.738 | 1.00th=[19792], 5.00th=[21103], 10.00th=[21627], 20.00th=[22938], 00:19:12.738 | 30.00th=[29230], 40.00th=[30278], 50.00th=[35914], 60.00th=[38011], 00:19:12.738 | 70.00th=[42206], 80.00th=[44303], 90.00th=[48497], 95.00th=[50070], 00:19:12.738 | 99.00th=[51119], 99.50th=[54264], 99.90th=[57410], 99.95th=[60031], 00:19:12.738 | 99.99th=[60031] 00:19:12.738 write: IOPS=1882, BW=7532KiB/s (7713kB/s)(7592KiB/1008msec); 0 zone resets 00:19:12.738 slat (usec): min=4, max=22564, avg=292.21, stdev=1283.59 00:19:12.738 clat (usec): min=3856, max=77998, avg=36058.14, stdev=19129.01 00:19:12.738 lat (usec): min=9490, max=78006, avg=36350.35, stdev=19257.66 00:19:12.738 clat percentiles (usec): 00:19:12.738 | 1.00th=[13698], 5.00th=[17171], 10.00th=[17171], 20.00th=[17695], 00:19:12.738 | 30.00th=[19268], 40.00th=[21627], 50.00th=[27395], 60.00th=[42730], 00:19:12.738 | 70.00th=[53216], 80.00th=[56886], 90.00th=[61080], 95.00th=[66847], 00:19:12.738 | 99.00th=[78119], 99.50th=[78119], 99.90th=[78119], 99.95th=[78119], 00:19:12.738 | 99.99th=[78119] 00:19:12.738 bw ( KiB/s): min= 5760, max= 8400, per=12.57%, avg=7080.00, stdev=1866.76, samples=2 00:19:12.738 iops : min= 1440, max= 2100, avg=1770.00, stdev=466.69, samples=2 00:19:12.738 lat (msec) : 4=0.03%, 10=0.23%, 20=19.07%, 50=58.47%, 100=22.19% 00:19:12.738 cpu : usr=1.99%, sys=4.27%, ctx=195, majf=0, minf=1 00:19:12.738 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:19:12.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.738 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:12.738 issued rwts: total=1536,1898,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.738 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:12.738 00:19:12.738 Run status group 0 (all jobs): 00:19:12.738 READ: bw=52.0MiB/s (54.5MB/s), 6095KiB/s-18.7MiB/s (6242kB/s-19.6MB/s), io=54.5MiB (57.1MB), run=1005-1048msec 00:19:12.738 WRITE: bw=55.0MiB/s (57.7MB/s), 7532KiB/s-19.9MiB/s (7713kB/s-20.9MB/s), io=57.7MiB (60.5MB), run=1005-1048msec 00:19:12.738 00:19:12.738 Disk stats (read/write): 00:19:12.738 nvme0n1: ios=3414/3584, merge=0/0, ticks=20696/24004, in_queue=44700, util=90.88% 00:19:12.738 nvme0n2: ios=2737/3072, merge=0/0, ticks=26061/26235, in_queue=52296, util=97.36% 00:19:12.738 nvme0n3: ios=4152/4159, merge=0/0, ticks=25560/25189, in_queue=50749, util=90.58% 00:19:12.738 nvme0n4: ios=1471/1536, merge=0/0, ticks=16419/16064, in_queue=32483, util=98.31% 00:19:12.738 06:45:16 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite 
-r 1 -v 00:19:12.738 [global] 00:19:12.738 thread=1 00:19:12.738 invalidate=1 00:19:12.738 rw=randwrite 00:19:12.738 time_based=1 00:19:12.738 runtime=1 00:19:12.738 ioengine=libaio 00:19:12.738 direct=1 00:19:12.738 bs=4096 00:19:12.738 iodepth=128 00:19:12.738 norandommap=0 00:19:12.738 numjobs=1 00:19:12.738 00:19:12.738 verify_dump=1 00:19:12.738 verify_backlog=512 00:19:12.738 verify_state_save=0 00:19:12.738 do_verify=1 00:19:12.738 verify=crc32c-intel 00:19:12.738 [job0] 00:19:12.738 filename=/dev/nvme0n1 00:19:12.738 [job1] 00:19:12.738 filename=/dev/nvme0n2 00:19:12.738 [job2] 00:19:12.738 filename=/dev/nvme0n3 00:19:12.738 [job3] 00:19:12.738 filename=/dev/nvme0n4 00:19:12.738 Could not set queue depth (nvme0n1) 00:19:12.738 Could not set queue depth (nvme0n2) 00:19:12.738 Could not set queue depth (nvme0n3) 00:19:12.738 Could not set queue depth (nvme0n4) 00:19:12.738 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:12.738 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:12.738 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:12.738 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:12.738 fio-3.35 00:19:12.738 Starting 4 threads 00:19:14.113 00:19:14.113 job0: (groupid=0, jobs=1): err= 0: pid=1905: Wed Apr 17 06:45:18 2024 00:19:14.113 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:19:14.113 slat (usec): min=2, max=29785, avg=169.10, stdev=1367.66 00:19:14.113 clat (usec): min=6180, max=69698, avg=22657.91, stdev=11885.59 00:19:14.113 lat (usec): min=6190, max=73542, avg=22827.00, stdev=11975.52 00:19:14.113 clat percentiles (usec): 00:19:14.113 | 1.00th=[ 6259], 5.00th=[ 8848], 10.00th=[11076], 20.00th=[11994], 00:19:14.113 | 30.00th=[14746], 40.00th=[18482], 50.00th=[21103], 60.00th=[23200], 00:19:14.113 | 70.00th=[25297], 80.00th=[31065], 90.00th=[36439], 95.00th=[48497], 00:19:14.113 | 99.00th=[62653], 99.50th=[64226], 99.90th=[66847], 99.95th=[69731], 00:19:14.113 | 99.99th=[69731] 00:19:14.113 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:19:14.113 slat (usec): min=3, max=22136, avg=176.54, stdev=1070.58 00:19:14.113 clat (usec): min=1039, max=67418, avg=22651.69, stdev=9687.88 00:19:14.113 lat (usec): min=1046, max=67426, avg=22828.23, stdev=9778.59 00:19:14.113 clat percentiles (usec): 00:19:14.113 | 1.00th=[ 7570], 5.00th=[ 9896], 10.00th=[11731], 20.00th=[12911], 00:19:14.113 | 30.00th=[17171], 40.00th=[18744], 50.00th=[21365], 60.00th=[23987], 00:19:14.113 | 70.00th=[27132], 80.00th=[29492], 90.00th=[34341], 95.00th=[40109], 00:19:14.113 | 99.00th=[56886], 99.50th=[62129], 99.90th=[67634], 99.95th=[67634], 00:19:14.113 | 99.99th=[67634] 00:19:14.113 bw ( KiB/s): min=10648, max=12864, per=21.32%, avg=11756.00, stdev=1566.95, samples=2 00:19:14.113 iops : min= 2662, max= 3216, avg=2939.00, stdev=391.74, samples=2 00:19:14.113 lat (msec) : 2=0.07%, 4=0.02%, 10=6.29%, 20=39.66%, 50=51.07% 00:19:14.113 lat (msec) : 100=2.90% 00:19:14.113 cpu : usr=2.89%, sys=5.58%, ctx=260, majf=0, minf=1 00:19:14.113 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:14.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:14.113 issued 
rwts: total=2560,3066,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.113 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.113 job1: (groupid=0, jobs=1): err= 0: pid=1906: Wed Apr 17 06:45:18 2024 00:19:14.113 read: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec) 00:19:14.113 slat (usec): min=2, max=25605, avg=132.14, stdev=1093.59 00:19:14.113 clat (usec): min=3024, max=54862, avg=19218.94, stdev=9695.37 00:19:14.113 lat (usec): min=3038, max=55337, avg=19351.08, stdev=9729.88 00:19:14.113 clat percentiles (usec): 00:19:14.113 | 1.00th=[ 6259], 5.00th=[ 9241], 10.00th=[10159], 20.00th=[12125], 00:19:14.113 | 30.00th=[13042], 40.00th=[14222], 50.00th=[16188], 60.00th=[20055], 00:19:14.113 | 70.00th=[21627], 80.00th=[26346], 90.00th=[31327], 95.00th=[36963], 00:19:14.113 | 99.00th=[52691], 99.50th=[53740], 99.90th=[54264], 99.95th=[54789], 00:19:14.113 | 99.99th=[54789] 00:19:14.113 write: IOPS=3632, BW=14.2MiB/s (14.9MB/s)(14.3MiB/1010msec); 0 zone resets 00:19:14.113 slat (usec): min=3, max=18188, avg=119.41, stdev=907.39 00:19:14.113 clat (usec): min=1432, max=46532, avg=16128.69, stdev=8877.28 00:19:14.113 lat (usec): min=2649, max=46548, avg=16248.10, stdev=8930.17 00:19:14.113 clat percentiles (usec): 00:19:14.113 | 1.00th=[ 4621], 5.00th=[ 6456], 10.00th=[ 8291], 20.00th=[10290], 00:19:14.113 | 30.00th=[11338], 40.00th=[11994], 50.00th=[12911], 60.00th=[14222], 00:19:14.113 | 70.00th=[17695], 80.00th=[21365], 90.00th=[28181], 95.00th=[35914], 00:19:14.113 | 99.00th=[42730], 99.50th=[42730], 99.90th=[45876], 99.95th=[45876], 00:19:14.113 | 99.99th=[46400] 00:19:14.113 bw ( KiB/s): min=12288, max=16384, per=25.99%, avg=14336.00, stdev=2896.31, samples=2 00:19:14.113 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:19:14.113 lat (msec) : 2=0.01%, 4=0.51%, 10=13.57%, 20=55.16%, 50=29.45% 00:19:14.113 lat (msec) : 100=1.30% 00:19:14.113 cpu : usr=3.27%, sys=4.86%, ctx=336, majf=0, minf=1 00:19:14.113 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:19:14.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:14.113 issued rwts: total=3584,3669,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.113 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.113 job2: (groupid=0, jobs=1): err= 0: pid=1907: Wed Apr 17 06:45:18 2024 00:19:14.113 read: IOPS=2532, BW=9.89MiB/s (10.4MB/s)(10.0MiB/1011msec) 00:19:14.113 slat (usec): min=3, max=30526, avg=218.99, stdev=1489.22 00:19:14.113 clat (usec): min=9031, max=73870, avg=29100.32, stdev=15205.53 00:19:14.113 lat (usec): min=9044, max=73887, avg=29319.32, stdev=15302.90 00:19:14.113 clat percentiles (usec): 00:19:14.113 | 1.00th=[11731], 5.00th=[11994], 10.00th=[15008], 20.00th=[16581], 00:19:14.113 | 30.00th=[18220], 40.00th=[20317], 50.00th=[21890], 60.00th=[29492], 00:19:14.113 | 70.00th=[34341], 80.00th=[42730], 90.00th=[54789], 95.00th=[57934], 00:19:14.113 | 99.00th=[68682], 99.50th=[68682], 99.90th=[69731], 99.95th=[72877], 00:19:14.113 | 99.99th=[73925] 00:19:14.113 write: IOPS=2567, BW=10.0MiB/s (10.5MB/s)(10.1MiB/1011msec); 0 zone resets 00:19:14.113 slat (usec): min=3, max=15016, avg=160.92, stdev=1047.36 00:19:14.113 clat (usec): min=8172, max=51965, avg=20416.39, stdev=8774.96 00:19:14.113 lat (usec): min=8183, max=52007, avg=20577.31, stdev=8847.40 00:19:14.113 clat percentiles (usec): 00:19:14.113 | 1.00th=[ 9503], 5.00th=[11207], 10.00th=[11600], 
20.00th=[12911], 00:19:14.113 | 30.00th=[13435], 40.00th=[15008], 50.00th=[16581], 60.00th=[20579], 00:19:14.113 | 70.00th=[27395], 80.00th=[29754], 90.00th=[32375], 95.00th=[35914], 00:19:14.113 | 99.00th=[41157], 99.50th=[41157], 99.90th=[44303], 99.95th=[45351], 00:19:14.113 | 99.99th=[52167] 00:19:14.113 bw ( KiB/s): min=10176, max=10304, per=18.57%, avg=10240.00, stdev=90.51, samples=2 00:19:14.113 iops : min= 2544, max= 2576, avg=2560.00, stdev=22.63, samples=2 00:19:14.113 lat (msec) : 10=0.87%, 20=48.08%, 50=44.10%, 100=6.94% 00:19:14.113 cpu : usr=2.77%, sys=4.46%, ctx=191, majf=0, minf=1 00:19:14.113 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:14.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:14.113 issued rwts: total=2560,2596,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.113 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.113 job3: (groupid=0, jobs=1): err= 0: pid=1908: Wed Apr 17 06:45:18 2024 00:19:14.113 read: IOPS=4528, BW=17.7MiB/s (18.6MB/s)(17.8MiB/1006msec) 00:19:14.113 slat (usec): min=2, max=17539, avg=117.63, stdev=900.33 00:19:14.113 clat (usec): min=2769, max=50735, avg=15443.03, stdev=6663.43 00:19:14.113 lat (usec): min=2778, max=50761, avg=15560.67, stdev=6729.14 00:19:14.113 clat percentiles (usec): 00:19:14.113 | 1.00th=[ 2966], 5.00th=[ 8225], 10.00th=[ 9634], 20.00th=[11469], 00:19:14.113 | 30.00th=[11994], 40.00th=[12649], 50.00th=[13304], 60.00th=[14877], 00:19:14.113 | 70.00th=[17433], 80.00th=[18482], 90.00th=[23987], 95.00th=[32113], 00:19:14.113 | 99.00th=[39584], 99.50th=[39584], 99.90th=[40109], 99.95th=[42730], 00:19:14.113 | 99.99th=[50594] 00:19:14.113 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:19:14.113 slat (usec): min=3, max=20519, avg=79.02, stdev=705.76 00:19:14.113 clat (usec): min=1044, max=41248, avg=12406.06, stdev=5673.18 00:19:14.113 lat (usec): min=1052, max=41301, avg=12485.08, stdev=5704.86 00:19:14.113 clat percentiles (usec): 00:19:14.113 | 1.00th=[ 3687], 5.00th=[ 5276], 10.00th=[ 6915], 20.00th=[ 8029], 00:19:14.113 | 30.00th=[ 9110], 40.00th=[10290], 50.00th=[12125], 60.00th=[12649], 00:19:14.113 | 70.00th=[13566], 80.00th=[15401], 90.00th=[19792], 95.00th=[24249], 00:19:14.113 | 99.00th=[34866], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 00:19:14.113 | 99.99th=[41157] 00:19:14.113 bw ( KiB/s): min=17920, max=18981, per=33.45%, avg=18450.50, stdev=750.24, samples=2 00:19:14.113 iops : min= 4480, max= 4745, avg=4612.50, stdev=187.38, samples=2 00:19:14.113 lat (msec) : 2=0.10%, 4=1.90%, 10=22.96%, 20=63.23%, 50=11.81% 00:19:14.113 lat (msec) : 100=0.01% 00:19:14.113 cpu : usr=3.78%, sys=7.36%, ctx=332, majf=0, minf=1 00:19:14.113 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:14.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:14.113 issued rwts: total=4556,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.113 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.113 00:19:14.113 Run status group 0 (all jobs): 00:19:14.113 READ: bw=51.2MiB/s (53.7MB/s), 9.89MiB/s-17.7MiB/s (10.4MB/s-18.6MB/s), io=51.8MiB (54.3MB), run=1005-1011msec 00:19:14.113 WRITE: bw=53.9MiB/s (56.5MB/s), 10.0MiB/s-17.9MiB/s (10.5MB/s-18.8MB/s), io=54.4MiB (57.1MB), run=1005-1011msec 
00:19:14.113 00:19:14.113 Disk stats (read/write): 00:19:14.113 nvme0n1: ios=2556/2560, merge=0/0, ticks=30488/27045, in_queue=57533, util=87.47% 00:19:14.113 nvme0n2: ios=3122/3140, merge=0/0, ticks=38601/30660, in_queue=69261, util=93.81% 00:19:14.113 nvme0n3: ios=2106/2172, merge=0/0, ticks=19911/15255, in_queue=35166, util=97.08% 00:19:14.113 nvme0n4: ios=3823/4096, merge=0/0, ticks=49707/46214, in_queue=95921, util=98.53% 00:19:14.113 06:45:18 -- target/fio.sh@55 -- # sync 00:19:14.113 06:45:18 -- target/fio.sh@59 -- # fio_pid=2065 00:19:14.113 06:45:18 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:14.113 06:45:18 -- target/fio.sh@61 -- # sleep 3 00:19:14.113 [global] 00:19:14.113 thread=1 00:19:14.113 invalidate=1 00:19:14.113 rw=read 00:19:14.113 time_based=1 00:19:14.113 runtime=10 00:19:14.113 ioengine=libaio 00:19:14.114 direct=1 00:19:14.114 bs=4096 00:19:14.114 iodepth=1 00:19:14.114 norandommap=1 00:19:14.114 numjobs=1 00:19:14.114 00:19:14.114 [job0] 00:19:14.114 filename=/dev/nvme0n1 00:19:14.114 [job1] 00:19:14.114 filename=/dev/nvme0n2 00:19:14.114 [job2] 00:19:14.114 filename=/dev/nvme0n3 00:19:14.114 [job3] 00:19:14.114 filename=/dev/nvme0n4 00:19:14.114 Could not set queue depth (nvme0n1) 00:19:14.114 Could not set queue depth (nvme0n2) 00:19:14.114 Could not set queue depth (nvme0n3) 00:19:14.114 Could not set queue depth (nvme0n4) 00:19:14.114 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:14.114 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:14.114 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:14.114 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:14.114 fio-3.35 00:19:14.114 Starting 4 threads 00:19:17.392 06:45:21 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:17.392 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=3420160, buflen=4096 00:19:17.392 fio: pid=2176, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:17.392 06:45:21 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:17.392 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=348160, buflen=4096 00:19:17.392 fio: pid=2175, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:17.392 06:45:21 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:17.392 06:45:21 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:17.650 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=35475456, buflen=4096 00:19:17.650 fio: pid=2173, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:17.650 06:45:22 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:17.650 06:45:22 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:17.908 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=360448, buflen=4096 00:19:17.908 fio: pid=2174, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O 
error 00:19:17.908 06:45:22 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:17.908 06:45:22 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:17.908 00:19:17.909 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2173: Wed Apr 17 06:45:22 2024 00:19:17.909 read: IOPS=2562, BW=10.0MiB/s (10.5MB/s)(33.8MiB/3380msec) 00:19:17.909 slat (usec): min=4, max=10899, avg=16.58, stdev=192.04 00:19:17.909 clat (usec): min=255, max=41060, avg=370.47, stdev=464.69 00:19:17.909 lat (usec): min=261, max=41067, avg=387.05, stdev=503.11 00:19:17.909 clat percentiles (usec): 00:19:17.909 | 1.00th=[ 277], 5.00th=[ 297], 10.00th=[ 302], 20.00th=[ 314], 00:19:17.909 | 30.00th=[ 326], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 359], 00:19:17.909 | 70.00th=[ 375], 80.00th=[ 392], 90.00th=[ 465], 95.00th=[ 502], 00:19:17.909 | 99.00th=[ 594], 99.50th=[ 644], 99.90th=[ 1037], 99.95th=[ 1221], 00:19:17.909 | 99.99th=[41157] 00:19:17.909 bw ( KiB/s): min= 8224, max=11368, per=96.27%, avg=10198.67, stdev=1056.28, samples=6 00:19:17.909 iops : min= 2056, max= 2842, avg=2549.67, stdev=264.07, samples=6 00:19:17.909 lat (usec) : 500=94.64%, 750=5.02%, 1000=0.21% 00:19:17.909 lat (msec) : 2=0.09%, 20=0.01%, 50=0.01% 00:19:17.909 cpu : usr=2.10%, sys=4.23%, ctx=8668, majf=0, minf=1 00:19:17.909 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:17.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.909 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.909 issued rwts: total=8662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.909 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:17.909 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2174: Wed Apr 17 06:45:22 2024 00:19:17.909 read: IOPS=24, BW=96.4KiB/s (98.7kB/s)(352KiB/3651msec) 00:19:17.909 slat (usec): min=11, max=10858, avg=240.29, stdev=1475.96 00:19:17.909 clat (usec): min=562, max=46670, avg=41225.80, stdev=4442.98 00:19:17.909 lat (usec): min=582, max=51985, avg=41468.40, stdev=4700.62 00:19:17.909 clat percentiles (usec): 00:19:17.909 | 1.00th=[ 562], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:17.909 | 30.00th=[41157], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:19:17.909 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:17.909 | 99.00th=[46924], 99.50th=[46924], 99.90th=[46924], 99.95th=[46924], 00:19:17.909 | 99.99th=[46924] 00:19:17.909 bw ( KiB/s): min= 93, max= 104, per=0.91%, avg=96.71, stdev= 3.40, samples=7 00:19:17.909 iops : min= 23, max= 26, avg=24.14, stdev= 0.90, samples=7 00:19:17.909 lat (usec) : 750=1.12% 00:19:17.909 lat (msec) : 50=97.75% 00:19:17.909 cpu : usr=0.08%, sys=0.00%, ctx=91, majf=0, minf=1 00:19:17.909 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:17.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.909 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.909 issued rwts: total=89,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.909 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:17.909 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2175: Wed Apr 17 06:45:22 2024 00:19:17.909 read: IOPS=27, 
BW=109KiB/s (111kB/s)(340KiB/3131msec) 00:19:17.909 slat (nsec): min=7504, max=46050, avg=20754.27, stdev=9378.56 00:19:17.909 clat (usec): min=381, max=42975, avg=36798.52, stdev=13329.47 00:19:17.909 lat (usec): min=399, max=42994, avg=36819.36, stdev=13331.50 00:19:17.909 clat percentiles (usec): 00:19:17.909 | 1.00th=[ 383], 5.00th=[ 523], 10.00th=[ 635], 20.00th=[41157], 00:19:17.909 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:19:17.909 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:17.909 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:19:17.909 | 99.99th=[42730] 00:19:17.909 bw ( KiB/s): min= 96, max= 152, per=1.02%, avg=108.00, stdev=22.49, samples=6 00:19:17.909 iops : min= 24, max= 38, avg=27.00, stdev= 5.62, samples=6 00:19:17.909 lat (usec) : 500=3.49%, 750=8.14% 00:19:17.909 lat (msec) : 50=87.21% 00:19:17.909 cpu : usr=0.00%, sys=0.10%, ctx=86, majf=0, minf=1 00:19:17.909 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:17.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.909 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.909 issued rwts: total=86,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.909 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:17.909 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2176: Wed Apr 17 06:45:22 2024 00:19:17.909 read: IOPS=289, BW=1156KiB/s (1183kB/s)(3340KiB/2890msec) 00:19:17.909 slat (nsec): min=4589, max=44465, avg=13616.96, stdev=5348.67 00:19:17.909 clat (usec): min=292, max=42084, avg=3443.16, stdev=10618.47 00:19:17.909 lat (usec): min=297, max=42128, avg=3456.78, stdev=10620.61 00:19:17.909 clat percentiles (usec): 00:19:17.909 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 338], 20.00th=[ 429], 00:19:17.909 | 30.00th=[ 457], 40.00th=[ 465], 50.00th=[ 474], 60.00th=[ 486], 00:19:17.909 | 70.00th=[ 510], 80.00th=[ 537], 90.00th=[ 611], 95.00th=[41157], 00:19:17.909 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:17.909 | 99.99th=[42206] 00:19:17.909 bw ( KiB/s): min= 96, max= 5312, per=11.61%, avg=1230.40, stdev=2289.44, samples=5 00:19:17.909 iops : min= 24, max= 1328, avg=307.60, stdev=572.36, samples=5 00:19:17.909 lat (usec) : 500=64.00%, 750=28.35%, 1000=0.12% 00:19:17.909 lat (msec) : 2=0.12%, 50=7.30% 00:19:17.909 cpu : usr=0.31%, sys=0.31%, ctx=838, majf=0, minf=1 00:19:17.909 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:17.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.909 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.909 issued rwts: total=836,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.909 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:17.909 00:19:17.909 Run status group 0 (all jobs): 00:19:17.909 READ: bw=10.3MiB/s (10.8MB/s), 96.4KiB/s-10.0MiB/s (98.7kB/s-10.5MB/s), io=37.8MiB (39.6MB), run=2890-3651msec 00:19:17.909 00:19:17.909 Disk stats (read/write): 00:19:17.909 nvme0n1: ios=8626/0, merge=0/0, ticks=3645/0, in_queue=3645, util=98.31% 00:19:17.909 nvme0n2: ios=87/0, merge=0/0, ticks=3583/0, in_queue=3583, util=96.09% 00:19:17.909 nvme0n3: ios=84/0, merge=0/0, ticks=3088/0, in_queue=3088, util=96.76% 00:19:17.909 nvme0n4: ios=885/0, merge=0/0, ticks=3950/0, in_queue=3950, util=99.19% 00:19:18.167 06:45:22 -- target/fio.sh@65 -- # 
for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:18.167 06:45:22 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:18.425 06:45:22 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:18.425 06:45:22 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:18.682 06:45:23 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:18.682 06:45:23 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:18.939 06:45:23 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:18.940 06:45:23 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:19.196 06:45:23 -- target/fio.sh@69 -- # fio_status=0 00:19:19.196 06:45:23 -- target/fio.sh@70 -- # wait 2065 00:19:19.196 06:45:23 -- target/fio.sh@70 -- # fio_status=4 00:19:19.196 06:45:23 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:19.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:19.454 06:45:23 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:19.454 06:45:23 -- common/autotest_common.sh@1205 -- # local i=0 00:19:19.454 06:45:23 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:19:19.454 06:45:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:19.454 06:45:23 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:19:19.454 06:45:23 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:19.454 06:45:23 -- common/autotest_common.sh@1217 -- # return 0 00:19:19.454 06:45:23 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:19.454 06:45:23 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:19.454 nvmf hotplug test: fio failed as expected 00:19:19.454 06:45:23 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:19.712 06:45:24 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:19.712 06:45:24 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:19.712 06:45:24 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:19.712 06:45:24 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:19.712 06:45:24 -- target/fio.sh@91 -- # nvmftestfini 00:19:19.712 06:45:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:19.712 06:45:24 -- nvmf/common.sh@117 -- # sync 00:19:19.712 06:45:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:19.712 06:45:24 -- nvmf/common.sh@120 -- # set +e 00:19:19.712 06:45:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:19.712 06:45:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:19.712 rmmod nvme_tcp 00:19:19.712 rmmod nvme_fabrics 00:19:19.712 rmmod nvme_keyring 00:19:19.712 06:45:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:19.712 06:45:24 -- nvmf/common.sh@124 -- # set -e 00:19:19.712 06:45:24 -- nvmf/common.sh@125 -- # return 0 00:19:19.712 06:45:24 -- nvmf/common.sh@478 -- # '[' -n 4193433 ']' 00:19:19.712 06:45:24 -- nvmf/common.sh@479 -- # killprocess 4193433 00:19:19.712 06:45:24 -- common/autotest_common.sh@936 -- # '[' -z 4193433 ']' 
00:19:19.712 06:45:24 -- common/autotest_common.sh@940 -- # kill -0 4193433 00:19:19.712 06:45:24 -- common/autotest_common.sh@941 -- # uname 00:19:19.712 06:45:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:19.712 06:45:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4193433 00:19:19.712 06:45:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:19.712 06:45:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:19.712 06:45:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4193433' 00:19:19.712 killing process with pid 4193433 00:19:19.712 06:45:24 -- common/autotest_common.sh@955 -- # kill 4193433 00:19:19.712 06:45:24 -- common/autotest_common.sh@960 -- # wait 4193433 00:19:19.970 06:45:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:19.970 06:45:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:19.970 06:45:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:19.970 06:45:24 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:19.970 06:45:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:19.970 06:45:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.970 06:45:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:19.970 06:45:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.503 06:45:26 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:22.503 00:19:22.503 real 0m23.331s 00:19:22.503 user 1m20.179s 00:19:22.503 sys 0m6.730s 00:19:22.503 06:45:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:22.503 06:45:26 -- common/autotest_common.sh@10 -- # set +x 00:19:22.503 ************************************ 00:19:22.503 END TEST nvmf_fio_target 00:19:22.503 ************************************ 00:19:22.503 06:45:26 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:22.503 06:45:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:22.503 06:45:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:22.503 06:45:26 -- common/autotest_common.sh@10 -- # set +x 00:19:22.503 ************************************ 00:19:22.503 START TEST nvmf_bdevio 00:19:22.503 ************************************ 00:19:22.503 06:45:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:22.503 * Looking for test storage... 
00:19:22.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:22.504 06:45:26 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:22.504 06:45:26 -- nvmf/common.sh@7 -- # uname -s 00:19:22.504 06:45:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:22.504 06:45:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:22.504 06:45:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:22.504 06:45:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:22.504 06:45:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:22.504 06:45:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:22.504 06:45:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:22.504 06:45:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:22.504 06:45:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:22.504 06:45:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:22.504 06:45:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:22.504 06:45:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:22.504 06:45:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:22.504 06:45:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:22.504 06:45:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:22.504 06:45:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:22.504 06:45:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:22.504 06:45:26 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:22.504 06:45:26 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:22.504 06:45:26 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:22.504 06:45:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.504 06:45:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.504 06:45:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.504 06:45:26 -- paths/export.sh@5 -- # export PATH 00:19:22.504 06:45:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.504 06:45:26 -- nvmf/common.sh@47 -- # : 0 00:19:22.504 06:45:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:22.504 06:45:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:22.504 06:45:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:22.504 06:45:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:22.504 06:45:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:22.504 06:45:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:22.504 06:45:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:22.504 06:45:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:22.504 06:45:26 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:22.504 06:45:26 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:22.504 06:45:26 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:22.504 06:45:26 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:22.504 06:45:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:22.504 06:45:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:22.504 06:45:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:22.504 06:45:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:22.504 06:45:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.504 06:45:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:22.504 06:45:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.504 06:45:26 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:22.504 06:45:26 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:22.504 06:45:26 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:22.504 06:45:26 -- common/autotest_common.sh@10 -- # set +x 00:19:24.409 06:45:28 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:24.409 06:45:28 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:24.409 06:45:28 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:24.409 06:45:28 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:24.409 06:45:28 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:24.409 06:45:28 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:24.409 06:45:28 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:24.409 06:45:28 -- nvmf/common.sh@295 -- # net_devs=() 00:19:24.409 06:45:28 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:24.409 06:45:28 -- nvmf/common.sh@296 
-- # e810=() 00:19:24.409 06:45:28 -- nvmf/common.sh@296 -- # local -ga e810 00:19:24.409 06:45:28 -- nvmf/common.sh@297 -- # x722=() 00:19:24.409 06:45:28 -- nvmf/common.sh@297 -- # local -ga x722 00:19:24.409 06:45:28 -- nvmf/common.sh@298 -- # mlx=() 00:19:24.409 06:45:28 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:24.409 06:45:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:24.409 06:45:28 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:24.409 06:45:28 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:24.409 06:45:28 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:24.409 06:45:28 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:24.409 06:45:28 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:24.409 06:45:28 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:24.409 06:45:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:24.409 06:45:28 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:24.409 06:45:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:24.409 06:45:28 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:24.409 06:45:28 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:24.409 06:45:28 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:24.409 06:45:28 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:24.409 06:45:28 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:24.409 06:45:28 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:24.409 06:45:28 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:24.409 06:45:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:24.409 06:45:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:24.409 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:24.409 06:45:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:24.409 06:45:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:24.409 06:45:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.409 06:45:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.409 06:45:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:24.409 06:45:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:24.409 06:45:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:24.409 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:24.409 06:45:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:24.409 06:45:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:24.409 06:45:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.409 06:45:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.409 06:45:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:24.409 06:45:28 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:24.409 06:45:28 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:24.409 06:45:28 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:24.409 06:45:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:24.409 06:45:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.409 06:45:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:24.409 06:45:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.409 06:45:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:24.409 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:19:24.409 06:45:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.409 06:45:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:24.409 06:45:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.409 06:45:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:24.409 06:45:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.409 06:45:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:24.409 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:24.409 06:45:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.409 06:45:28 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:24.409 06:45:28 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:24.409 06:45:28 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:24.409 06:45:28 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:24.409 06:45:28 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:24.409 06:45:28 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:24.409 06:45:28 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:24.409 06:45:28 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:24.409 06:45:28 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:24.409 06:45:28 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:24.409 06:45:28 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:24.409 06:45:28 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:24.409 06:45:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:24.409 06:45:28 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:24.409 06:45:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:24.409 06:45:28 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:24.409 06:45:28 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:24.409 06:45:28 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:24.409 06:45:28 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:24.409 06:45:28 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:24.409 06:45:28 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:24.409 06:45:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:24.409 06:45:28 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:24.409 06:45:28 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:24.409 06:45:28 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:24.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:24.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:19:24.409 00:19:24.409 --- 10.0.0.2 ping statistics --- 00:19:24.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.409 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:19:24.409 06:45:28 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:24.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:24.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:19:24.409 00:19:24.409 --- 10.0.0.1 ping statistics --- 00:19:24.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.409 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:19:24.409 06:45:28 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:24.409 06:45:28 -- nvmf/common.sh@411 -- # return 0 00:19:24.409 06:45:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:24.409 06:45:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:24.409 06:45:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:24.409 06:45:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:24.409 06:45:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:24.409 06:45:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:24.409 06:45:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:24.409 06:45:28 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:24.409 06:45:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:24.409 06:45:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:24.409 06:45:28 -- common/autotest_common.sh@10 -- # set +x 00:19:24.409 06:45:28 -- nvmf/common.sh@470 -- # nvmfpid=4903 00:19:24.409 06:45:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:24.409 06:45:28 -- nvmf/common.sh@471 -- # waitforlisten 4903 00:19:24.409 06:45:28 -- common/autotest_common.sh@817 -- # '[' -z 4903 ']' 00:19:24.409 06:45:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.409 06:45:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:24.410 06:45:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.410 06:45:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:24.410 06:45:28 -- common/autotest_common.sh@10 -- # set +x 00:19:24.410 [2024-04-17 06:45:28.928708] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:19:24.410 [2024-04-17 06:45:28.928784] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.410 EAL: No free 2048 kB hugepages reported on node 1 00:19:24.410 [2024-04-17 06:45:28.993976] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:24.696 [2024-04-17 06:45:29.086019] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.696 [2024-04-17 06:45:29.086076] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.696 [2024-04-17 06:45:29.086089] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.696 [2024-04-17 06:45:29.086100] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.696 [2024-04-17 06:45:29.086110] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:24.696 [2024-04-17 06:45:29.086203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:24.696 [2024-04-17 06:45:29.086266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:24.696 [2024-04-17 06:45:29.086333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:24.696 [2024-04-17 06:45:29.086336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:24.696 06:45:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:24.696 06:45:29 -- common/autotest_common.sh@850 -- # return 0 00:19:24.696 06:45:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:24.696 06:45:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:24.696 06:45:29 -- common/autotest_common.sh@10 -- # set +x 00:19:24.696 06:45:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.696 06:45:29 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:24.696 06:45:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.696 06:45:29 -- common/autotest_common.sh@10 -- # set +x 00:19:24.696 [2024-04-17 06:45:29.242968] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:24.696 06:45:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.696 06:45:29 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:24.696 06:45:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.696 06:45:29 -- common/autotest_common.sh@10 -- # set +x 00:19:24.696 Malloc0 00:19:24.696 06:45:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.696 06:45:29 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:24.696 06:45:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.696 06:45:29 -- common/autotest_common.sh@10 -- # set +x 00:19:24.696 06:45:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.696 06:45:29 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:24.696 06:45:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.954 06:45:29 -- common/autotest_common.sh@10 -- # set +x 00:19:24.954 06:45:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.954 06:45:29 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:24.954 06:45:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.954 06:45:29 -- common/autotest_common.sh@10 -- # set +x 00:19:24.954 [2024-04-17 06:45:29.296529] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:24.954 06:45:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.954 06:45:29 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:24.954 06:45:29 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:24.954 06:45:29 -- nvmf/common.sh@521 -- # config=() 00:19:24.954 06:45:29 -- nvmf/common.sh@521 -- # local subsystem config 00:19:24.954 06:45:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:24.954 06:45:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:24.954 { 00:19:24.954 "params": { 00:19:24.954 "name": "Nvme$subsystem", 00:19:24.954 "trtype": "$TEST_TRANSPORT", 00:19:24.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:24.954 "adrfam": "ipv4", 00:19:24.954 "trsvcid": 
"$NVMF_PORT", 00:19:24.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:24.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:24.954 "hdgst": ${hdgst:-false}, 00:19:24.954 "ddgst": ${ddgst:-false} 00:19:24.954 }, 00:19:24.954 "method": "bdev_nvme_attach_controller" 00:19:24.954 } 00:19:24.954 EOF 00:19:24.954 )") 00:19:24.954 06:45:29 -- nvmf/common.sh@543 -- # cat 00:19:24.954 06:45:29 -- nvmf/common.sh@545 -- # jq . 00:19:24.954 06:45:29 -- nvmf/common.sh@546 -- # IFS=, 00:19:24.954 06:45:29 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:24.954 "params": { 00:19:24.954 "name": "Nvme1", 00:19:24.954 "trtype": "tcp", 00:19:24.954 "traddr": "10.0.0.2", 00:19:24.954 "adrfam": "ipv4", 00:19:24.954 "trsvcid": "4420", 00:19:24.954 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.954 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:24.954 "hdgst": false, 00:19:24.954 "ddgst": false 00:19:24.954 }, 00:19:24.954 "method": "bdev_nvme_attach_controller" 00:19:24.954 }' 00:19:24.954 [2024-04-17 06:45:29.343547] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:19:24.954 [2024-04-17 06:45:29.343631] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4931 ] 00:19:24.954 EAL: No free 2048 kB hugepages reported on node 1 00:19:24.954 [2024-04-17 06:45:29.409825] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:24.954 [2024-04-17 06:45:29.498183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.954 [2024-04-17 06:45:29.498207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:24.954 [2024-04-17 06:45:29.498211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.954 [2024-04-17 06:45:29.507058] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:19:25.212 I/O targets: 00:19:25.212 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:25.212 00:19:25.212 00:19:25.212 CUnit - A unit testing framework for C - Version 2.1-3 00:19:25.212 http://cunit.sourceforge.net/ 00:19:25.212 00:19:25.212 00:19:25.212 Suite: bdevio tests on: Nvme1n1 00:19:25.212 Test: blockdev write read block ...passed 00:19:25.212 Test: blockdev write zeroes read block ...passed 00:19:25.212 Test: blockdev write zeroes read no split ...passed 00:19:25.212 Test: blockdev write zeroes read split ...passed 00:19:25.470 Test: blockdev write zeroes read split partial ...passed 00:19:25.470 Test: blockdev reset ...[2024-04-17 06:45:29.872064] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:25.470 [2024-04-17 06:45:29.872187] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa69de0 (9): Bad file descriptor 00:19:25.470 [2024-04-17 06:45:29.968667] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:25.470 passed 00:19:25.470 Test: blockdev write read 8 blocks ...passed 00:19:25.470 Test: blockdev write read size > 128k ...passed 00:19:25.470 Test: blockdev write read invalid size ...passed 00:19:25.470 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:25.470 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:25.470 Test: blockdev write read max offset ...passed 00:19:25.728 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:25.728 Test: blockdev writev readv 8 blocks ...passed 00:19:25.728 Test: blockdev writev readv 30 x 1block ...passed 00:19:25.728 Test: blockdev writev readv block ...passed 00:19:25.728 Test: blockdev writev readv size > 128k ...passed 00:19:25.728 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:25.728 Test: blockdev comparev and writev ...[2024-04-17 06:45:30.184314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.728 [2024-04-17 06:45:30.184357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:25.728 [2024-04-17 06:45:30.184382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.728 [2024-04-17 06:45:30.184398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.728 [2024-04-17 06:45:30.184786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.728 [2024-04-17 06:45:30.184810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:25.728 [2024-04-17 06:45:30.184832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.728 [2024-04-17 06:45:30.184848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:25.728 [2024-04-17 06:45:30.185202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.728 [2024-04-17 06:45:30.185226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:25.728 [2024-04-17 06:45:30.185247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.728 [2024-04-17 06:45:30.185262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:25.728 [2024-04-17 06:45:30.185626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.728 [2024-04-17 06:45:30.185651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:25.728 [2024-04-17 06:45:30.185672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.728 [2024-04-17 06:45:30.185688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:25.728 passed 00:19:25.728 Test: blockdev nvme passthru rw ...passed 00:19:25.728 Test: blockdev nvme passthru vendor specific ...[2024-04-17 06:45:30.269542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:25.728 [2024-04-17 06:45:30.269569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:25.728 [2024-04-17 06:45:30.269754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:25.728 [2024-04-17 06:45:30.269784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:25.728 [2024-04-17 06:45:30.269965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:25.728 [2024-04-17 06:45:30.269988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:25.728 [2024-04-17 06:45:30.270163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:25.728 [2024-04-17 06:45:30.270194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:25.728 passed 00:19:25.728 Test: blockdev nvme admin passthru ...passed 00:19:25.728 Test: blockdev copy ...passed 00:19:25.728 00:19:25.728 Run Summary: Type Total Ran Passed Failed Inactive 00:19:25.728 suites 1 1 n/a 0 0 00:19:25.728 tests 23 23 23 0 0 00:19:25.728 asserts 152 152 152 0 n/a 00:19:25.728 00:19:25.728 Elapsed time = 1.314 seconds 00:19:25.986 06:45:30 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:25.986 06:45:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.986 06:45:30 -- common/autotest_common.sh@10 -- # set +x 00:19:25.986 06:45:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.986 06:45:30 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:25.986 06:45:30 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:25.986 06:45:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:25.986 06:45:30 -- nvmf/common.sh@117 -- # sync 00:19:25.986 06:45:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:25.986 06:45:30 -- nvmf/common.sh@120 -- # set +e 00:19:25.986 06:45:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:25.986 06:45:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:25.986 rmmod nvme_tcp 00:19:25.986 rmmod nvme_fabrics 00:19:25.986 rmmod nvme_keyring 00:19:25.986 06:45:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:25.986 06:45:30 -- nvmf/common.sh@124 -- # set -e 00:19:25.986 06:45:30 -- nvmf/common.sh@125 -- # return 0 00:19:25.986 06:45:30 -- nvmf/common.sh@478 -- # '[' -n 4903 ']' 00:19:25.986 06:45:30 -- nvmf/common.sh@479 -- # killprocess 4903 00:19:25.986 06:45:30 -- common/autotest_common.sh@936 -- # '[' -z 4903 ']' 00:19:25.986 06:45:30 -- common/autotest_common.sh@940 -- # kill -0 4903 00:19:25.986 06:45:30 -- common/autotest_common.sh@941 -- # uname 00:19:25.986 06:45:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:25.986 06:45:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4903 00:19:26.244 06:45:30 -- common/autotest_common.sh@942 -- # 
process_name=reactor_3 00:19:26.244 06:45:30 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:19:26.244 06:45:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4903' 00:19:26.244 killing process with pid 4903 00:19:26.244 06:45:30 -- common/autotest_common.sh@955 -- # kill 4903 00:19:26.244 06:45:30 -- common/autotest_common.sh@960 -- # wait 4903 00:19:26.502 06:45:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:26.502 06:45:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:26.502 06:45:30 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:26.502 06:45:30 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:26.502 06:45:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:26.502 06:45:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.502 06:45:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:26.502 06:45:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.401 06:45:32 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:28.401 00:19:28.401 real 0m6.282s 00:19:28.401 user 0m9.890s 00:19:28.401 sys 0m2.121s 00:19:28.401 06:45:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:28.401 06:45:32 -- common/autotest_common.sh@10 -- # set +x 00:19:28.401 ************************************ 00:19:28.401 END TEST nvmf_bdevio 00:19:28.401 ************************************ 00:19:28.401 06:45:32 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:19:28.401 06:45:32 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:28.401 06:45:32 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:19:28.401 06:45:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:28.401 06:45:32 -- common/autotest_common.sh@10 -- # set +x 00:19:28.401 ************************************ 00:19:28.401 START TEST nvmf_bdevio_no_huge 00:19:28.401 ************************************ 00:19:28.401 06:45:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:28.659 * Looking for test storage... 
00:19:28.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:28.659 06:45:33 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:28.659 06:45:33 -- nvmf/common.sh@7 -- # uname -s 00:19:28.659 06:45:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:28.659 06:45:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:28.659 06:45:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:28.659 06:45:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:28.659 06:45:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:28.659 06:45:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:28.659 06:45:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:28.659 06:45:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:28.659 06:45:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:28.659 06:45:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:28.659 06:45:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.659 06:45:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.659 06:45:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:28.659 06:45:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:28.659 06:45:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:28.659 06:45:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:28.659 06:45:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:28.659 06:45:33 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:28.659 06:45:33 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:28.659 06:45:33 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:28.660 06:45:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.660 06:45:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.660 06:45:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.660 06:45:33 -- paths/export.sh@5 -- # export PATH 00:19:28.660 06:45:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.660 06:45:33 -- nvmf/common.sh@47 -- # : 0 00:19:28.660 06:45:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:28.660 06:45:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:28.660 06:45:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:28.660 06:45:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:28.660 06:45:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:28.660 06:45:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:28.660 06:45:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:28.660 06:45:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:28.660 06:45:33 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:28.660 06:45:33 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:28.660 06:45:33 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:28.660 06:45:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:28.660 06:45:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:28.660 06:45:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:28.660 06:45:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:28.660 06:45:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:28.660 06:45:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.660 06:45:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:28.660 06:45:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.660 06:45:33 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:28.660 06:45:33 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:28.660 06:45:33 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:28.660 06:45:33 -- common/autotest_common.sh@10 -- # set +x 00:19:30.557 06:45:35 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:30.557 06:45:35 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:30.557 06:45:35 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:30.557 06:45:35 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:30.558 06:45:35 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:30.558 06:45:35 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:30.558 06:45:35 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:30.558 06:45:35 -- nvmf/common.sh@295 -- # net_devs=() 00:19:30.558 06:45:35 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:30.558 06:45:35 -- nvmf/common.sh@296 
-- # e810=() 00:19:30.558 06:45:35 -- nvmf/common.sh@296 -- # local -ga e810 00:19:30.558 06:45:35 -- nvmf/common.sh@297 -- # x722=() 00:19:30.558 06:45:35 -- nvmf/common.sh@297 -- # local -ga x722 00:19:30.558 06:45:35 -- nvmf/common.sh@298 -- # mlx=() 00:19:30.558 06:45:35 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:30.558 06:45:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:30.558 06:45:35 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:30.558 06:45:35 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:30.558 06:45:35 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:30.558 06:45:35 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:30.558 06:45:35 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:30.558 06:45:35 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:30.558 06:45:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:30.558 06:45:35 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:30.558 06:45:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:30.558 06:45:35 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:30.558 06:45:35 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:30.558 06:45:35 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:30.558 06:45:35 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:30.558 06:45:35 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:30.558 06:45:35 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:30.558 06:45:35 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:30.558 06:45:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:30.558 06:45:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:30.558 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:30.558 06:45:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:30.558 06:45:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:30.558 06:45:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.558 06:45:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.558 06:45:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:30.558 06:45:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:30.558 06:45:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:30.558 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:30.558 06:45:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:30.558 06:45:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:30.558 06:45:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.558 06:45:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.558 06:45:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:30.558 06:45:35 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:30.558 06:45:35 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:30.558 06:45:35 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:30.558 06:45:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:30.558 06:45:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.558 06:45:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:30.558 06:45:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.558 06:45:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:30.558 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:19:30.558 06:45:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.558 06:45:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:30.558 06:45:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.558 06:45:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:30.558 06:45:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.558 06:45:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:30.558 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:30.558 06:45:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.558 06:45:35 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:30.558 06:45:35 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:30.558 06:45:35 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:30.558 06:45:35 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:30.558 06:45:35 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:30.558 06:45:35 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:30.558 06:45:35 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:30.558 06:45:35 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:30.558 06:45:35 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:30.558 06:45:35 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:30.558 06:45:35 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:30.558 06:45:35 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:30.558 06:45:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:30.558 06:45:35 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:30.558 06:45:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:30.558 06:45:35 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:30.558 06:45:35 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:30.558 06:45:35 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:30.558 06:45:35 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:30.558 06:45:35 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:30.558 06:45:35 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:30.558 06:45:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:30.816 06:45:35 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:30.816 06:45:35 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:30.816 06:45:35 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:30.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:30.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:19:30.816 00:19:30.816 --- 10.0.0.2 ping statistics --- 00:19:30.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.816 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:19:30.816 06:45:35 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:30.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:30.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:19:30.816 00:19:30.816 --- 10.0.0.1 ping statistics --- 00:19:30.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.816 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:19:30.816 06:45:35 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:30.816 06:45:35 -- nvmf/common.sh@411 -- # return 0 00:19:30.816 06:45:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:30.816 06:45:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:30.816 06:45:35 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:30.816 06:45:35 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:30.816 06:45:35 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:30.816 06:45:35 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:30.816 06:45:35 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:30.816 06:45:35 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:30.816 06:45:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:30.816 06:45:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:30.816 06:45:35 -- common/autotest_common.sh@10 -- # set +x 00:19:30.816 06:45:35 -- nvmf/common.sh@470 -- # nvmfpid=7126 00:19:30.816 06:45:35 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:30.816 06:45:35 -- nvmf/common.sh@471 -- # waitforlisten 7126 00:19:30.816 06:45:35 -- common/autotest_common.sh@817 -- # '[' -z 7126 ']' 00:19:30.816 06:45:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.816 06:45:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:30.816 06:45:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.816 06:45:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:30.816 06:45:35 -- common/autotest_common.sh@10 -- # set +x 00:19:30.816 [2024-04-17 06:45:35.274902] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:19:30.816 [2024-04-17 06:45:35.274995] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:30.816 [2024-04-17 06:45:35.346136] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:31.074 [2024-04-17 06:45:35.434270] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:31.074 [2024-04-17 06:45:35.434326] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:31.074 [2024-04-17 06:45:35.434352] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:31.074 [2024-04-17 06:45:35.434365] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:31.074 [2024-04-17 06:45:35.434376] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
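The nvmf_tcp_init trace above brings the two e810 test ports (cvl_0_0/cvl_0_1) into a point-to-point topology, one side in its own network namespace, and then launches nvmf_tgt inside that namespace. A minimal sketch of that bring-up, using only the interface names, addresses, port and flags seen in the log (long Jenkins paths shortened, surrounding helper logic assumed):

  # Reconstructed from the nvmf_tcp_init trace above; workspace paths elided.
  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"                                 # target side gets its own netns
  ip link set cvl_0_0 netns "$NS"                    # first port moves into the target ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root ns
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                 # initiator -> target reachability check
  ip netns exec "$NS" ping -c 1 10.0.0.1             # target -> initiator reachability check
  # nvmf_tgt is then started inside the namespace with the flags from the log:
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
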
00:19:31.074 [2024-04-17 06:45:35.434476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:31.074 [2024-04-17 06:45:35.434542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:31.074 [2024-04-17 06:45:35.434597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:31.074 [2024-04-17 06:45:35.434600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:31.074 06:45:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:31.074 06:45:35 -- common/autotest_common.sh@850 -- # return 0 00:19:31.074 06:45:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:31.074 06:45:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:31.074 06:45:35 -- common/autotest_common.sh@10 -- # set +x 00:19:31.074 06:45:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:31.074 06:45:35 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:31.074 06:45:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:31.074 06:45:35 -- common/autotest_common.sh@10 -- # set +x 00:19:31.074 [2024-04-17 06:45:35.562351] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.074 06:45:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.074 06:45:35 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:31.074 06:45:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:31.074 06:45:35 -- common/autotest_common.sh@10 -- # set +x 00:19:31.074 Malloc0 00:19:31.074 06:45:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.074 06:45:35 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:31.074 06:45:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:31.074 06:45:35 -- common/autotest_common.sh@10 -- # set +x 00:19:31.074 06:45:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.074 06:45:35 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:31.074 06:45:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:31.074 06:45:35 -- common/autotest_common.sh@10 -- # set +x 00:19:31.074 06:45:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.074 06:45:35 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:31.074 06:45:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:31.074 06:45:35 -- common/autotest_common.sh@10 -- # set +x 00:19:31.074 [2024-04-17 06:45:35.600455] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.074 06:45:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.075 06:45:35 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:31.075 06:45:35 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:31.075 06:45:35 -- nvmf/common.sh@521 -- # config=() 00:19:31.075 06:45:35 -- nvmf/common.sh@521 -- # local subsystem config 00:19:31.075 06:45:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:31.075 06:45:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:31.075 { 00:19:31.075 "params": { 00:19:31.075 "name": "Nvme$subsystem", 00:19:31.075 "trtype": "$TEST_TRANSPORT", 00:19:31.075 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.075 "adrfam": "ipv4", 00:19:31.075 
"trsvcid": "$NVMF_PORT", 00:19:31.075 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.075 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.075 "hdgst": ${hdgst:-false}, 00:19:31.075 "ddgst": ${ddgst:-false} 00:19:31.075 }, 00:19:31.075 "method": "bdev_nvme_attach_controller" 00:19:31.075 } 00:19:31.075 EOF 00:19:31.075 )") 00:19:31.075 06:45:35 -- nvmf/common.sh@543 -- # cat 00:19:31.075 06:45:35 -- nvmf/common.sh@545 -- # jq . 00:19:31.075 06:45:35 -- nvmf/common.sh@546 -- # IFS=, 00:19:31.075 06:45:35 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:31.075 "params": { 00:19:31.075 "name": "Nvme1", 00:19:31.075 "trtype": "tcp", 00:19:31.075 "traddr": "10.0.0.2", 00:19:31.075 "adrfam": "ipv4", 00:19:31.075 "trsvcid": "4420", 00:19:31.075 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.075 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:31.075 "hdgst": false, 00:19:31.075 "ddgst": false 00:19:31.075 }, 00:19:31.075 "method": "bdev_nvme_attach_controller" 00:19:31.075 }' 00:19:31.075 [2024-04-17 06:45:35.643325] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:19:31.075 [2024-04-17 06:45:35.643395] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid7158 ] 00:19:31.332 [2024-04-17 06:45:35.703545] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:31.332 [2024-04-17 06:45:35.788737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.332 [2024-04-17 06:45:35.788789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.332 [2024-04-17 06:45:35.788792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.332 [2024-04-17 06:45:35.797720] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:19:31.589 I/O targets: 00:19:31.590 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:31.590 00:19:31.590 00:19:31.590 CUnit - A unit testing framework for C - Version 2.1-3 00:19:31.590 http://cunit.sourceforge.net/ 00:19:31.590 00:19:31.590 00:19:31.590 Suite: bdevio tests on: Nvme1n1 00:19:31.590 Test: blockdev write read block ...passed 00:19:31.590 Test: blockdev write zeroes read block ...passed 00:19:31.847 Test: blockdev write zeroes read no split ...passed 00:19:31.847 Test: blockdev write zeroes read split ...passed 00:19:31.847 Test: blockdev write zeroes read split partial ...passed 00:19:31.847 Test: blockdev reset ...[2024-04-17 06:45:36.316636] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:31.847 [2024-04-17 06:45:36.316751] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc22860 (9): Bad file descriptor 00:19:31.847 [2024-04-17 06:45:36.425385] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:31.847 passed 00:19:31.847 Test: blockdev write read 8 blocks ...passed 00:19:31.847 Test: blockdev write read size > 128k ...passed 00:19:31.847 Test: blockdev write read invalid size ...passed 00:19:32.104 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:32.105 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:32.105 Test: blockdev write read max offset ...passed 00:19:32.105 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:32.105 Test: blockdev writev readv 8 blocks ...passed 00:19:32.105 Test: blockdev writev readv 30 x 1block ...passed 00:19:32.105 Test: blockdev writev readv block ...passed 00:19:32.105 Test: blockdev writev readv size > 128k ...passed 00:19:32.105 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:32.105 Test: blockdev comparev and writev ...[2024-04-17 06:45:36.644479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:32.105 [2024-04-17 06:45:36.644515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:32.105 [2024-04-17 06:45:36.644539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:32.105 [2024-04-17 06:45:36.644556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.105 [2024-04-17 06:45:36.644923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:32.105 [2024-04-17 06:45:36.644948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:32.105 [2024-04-17 06:45:36.644978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:32.105 [2024-04-17 06:45:36.644994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:32.105 [2024-04-17 06:45:36.645366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:32.105 [2024-04-17 06:45:36.645390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:32.105 [2024-04-17 06:45:36.645412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:32.105 [2024-04-17 06:45:36.645428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:32.105 [2024-04-17 06:45:36.645790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:32.105 [2024-04-17 06:45:36.645814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:32.105 [2024-04-17 06:45:36.645835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:32.105 [2024-04-17 06:45:36.645851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:32.105 passed 00:19:32.363 Test: blockdev nvme passthru rw ...passed 00:19:32.363 Test: blockdev nvme passthru vendor specific ...[2024-04-17 06:45:36.729526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:32.363 [2024-04-17 06:45:36.729555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:32.363 [2024-04-17 06:45:36.729771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:32.363 [2024-04-17 06:45:36.729794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:32.363 [2024-04-17 06:45:36.730006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:32.363 [2024-04-17 06:45:36.730030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:32.363 [2024-04-17 06:45:36.730238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:32.363 [2024-04-17 06:45:36.730267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:32.363 passed 00:19:32.363 Test: blockdev nvme admin passthru ...passed 00:19:32.363 Test: blockdev copy ...passed 00:19:32.363 00:19:32.363 Run Summary: Type Total Ran Passed Failed Inactive 00:19:32.363 suites 1 1 n/a 0 0 00:19:32.363 tests 23 23 23 0 0 00:19:32.363 asserts 152 152 152 0 n/a 00:19:32.363 00:19:32.363 Elapsed time = 1.366 seconds 00:19:32.621 06:45:37 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:32.621 06:45:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:32.621 06:45:37 -- common/autotest_common.sh@10 -- # set +x 00:19:32.621 06:45:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:32.621 06:45:37 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:32.621 06:45:37 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:32.621 06:45:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:32.621 06:45:37 -- nvmf/common.sh@117 -- # sync 00:19:32.621 06:45:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:32.621 06:45:37 -- nvmf/common.sh@120 -- # set +e 00:19:32.621 06:45:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:32.621 06:45:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:32.621 rmmod nvme_tcp 00:19:32.621 rmmod nvme_fabrics 00:19:32.621 rmmod nvme_keyring 00:19:32.621 06:45:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:32.621 06:45:37 -- nvmf/common.sh@124 -- # set -e 00:19:32.621 06:45:37 -- nvmf/common.sh@125 -- # return 0 00:19:32.621 06:45:37 -- nvmf/common.sh@478 -- # '[' -n 7126 ']' 00:19:32.621 06:45:37 -- nvmf/common.sh@479 -- # killprocess 7126 00:19:32.621 06:45:37 -- common/autotest_common.sh@936 -- # '[' -z 7126 ']' 00:19:32.621 06:45:37 -- common/autotest_common.sh@940 -- # kill -0 7126 00:19:32.621 06:45:37 -- common/autotest_common.sh@941 -- # uname 00:19:32.621 06:45:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:32.621 06:45:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 7126 00:19:32.621 06:45:37 -- common/autotest_common.sh@942 -- # 
process_name=reactor_3 00:19:32.621 06:45:37 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:19:32.621 06:45:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 7126' 00:19:32.621 killing process with pid 7126 00:19:32.621 06:45:37 -- common/autotest_common.sh@955 -- # kill 7126 00:19:32.621 06:45:37 -- common/autotest_common.sh@960 -- # wait 7126 00:19:33.188 06:45:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:33.188 06:45:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:33.188 06:45:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:33.188 06:45:37 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:33.188 06:45:37 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:33.188 06:45:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.188 06:45:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:33.188 06:45:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.089 06:45:39 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:35.089 00:19:35.089 real 0m6.630s 00:19:35.089 user 0m11.647s 00:19:35.089 sys 0m2.522s 00:19:35.089 06:45:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:35.089 06:45:39 -- common/autotest_common.sh@10 -- # set +x 00:19:35.089 ************************************ 00:19:35.089 END TEST nvmf_bdevio_no_huge 00:19:35.089 ************************************ 00:19:35.089 06:45:39 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:35.089 06:45:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:35.089 06:45:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:35.089 06:45:39 -- common/autotest_common.sh@10 -- # set +x 00:19:35.348 ************************************ 00:19:35.348 START TEST nvmf_tls 00:19:35.348 ************************************ 00:19:35.348 06:45:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:35.348 * Looking for test storage... 
00:19:35.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:35.348 06:45:39 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:35.348 06:45:39 -- nvmf/common.sh@7 -- # uname -s 00:19:35.348 06:45:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.348 06:45:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.348 06:45:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.348 06:45:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.348 06:45:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.348 06:45:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.348 06:45:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.348 06:45:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.348 06:45:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.348 06:45:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.348 06:45:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:35.348 06:45:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:35.348 06:45:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.348 06:45:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.348 06:45:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:35.348 06:45:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:35.348 06:45:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:35.348 06:45:39 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.348 06:45:39 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.348 06:45:39 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.348 06:45:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.349 06:45:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.349 06:45:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.349 06:45:39 -- paths/export.sh@5 -- # export PATH 00:19:35.349 06:45:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.349 06:45:39 -- nvmf/common.sh@47 -- # : 0 00:19:35.349 06:45:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:35.349 06:45:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:35.349 06:45:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:35.349 06:45:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.349 06:45:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.349 06:45:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:35.349 06:45:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:35.349 06:45:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:35.349 06:45:39 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:35.349 06:45:39 -- target/tls.sh@62 -- # nvmftestinit 00:19:35.349 06:45:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:35.349 06:45:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:35.349 06:45:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:35.349 06:45:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:35.349 06:45:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:35.349 06:45:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.349 06:45:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:35.349 06:45:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.349 06:45:39 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:35.349 06:45:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:35.349 06:45:39 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:35.349 06:45:39 -- common/autotest_common.sh@10 -- # set +x 00:19:37.249 06:45:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:37.249 06:45:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:37.249 06:45:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:37.249 06:45:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:37.249 06:45:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:37.249 06:45:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:37.249 06:45:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:37.249 06:45:41 -- nvmf/common.sh@295 -- # net_devs=() 00:19:37.249 06:45:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:37.249 06:45:41 -- nvmf/common.sh@296 -- # e810=() 00:19:37.249 
06:45:41 -- nvmf/common.sh@296 -- # local -ga e810 00:19:37.249 06:45:41 -- nvmf/common.sh@297 -- # x722=() 00:19:37.249 06:45:41 -- nvmf/common.sh@297 -- # local -ga x722 00:19:37.249 06:45:41 -- nvmf/common.sh@298 -- # mlx=() 00:19:37.249 06:45:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:37.249 06:45:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:37.249 06:45:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:37.249 06:45:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:37.249 06:45:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:37.249 06:45:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:37.249 06:45:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:37.249 06:45:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:37.249 06:45:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:37.249 06:45:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:37.249 06:45:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:37.249 06:45:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:37.249 06:45:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:37.249 06:45:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:37.249 06:45:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:37.249 06:45:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:37.249 06:45:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:37.249 06:45:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:37.249 06:45:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:37.249 06:45:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:37.249 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:37.249 06:45:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:37.249 06:45:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:37.249 06:45:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.249 06:45:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.249 06:45:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:37.249 06:45:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:37.249 06:45:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:37.249 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:37.249 06:45:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:37.249 06:45:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:37.249 06:45:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.249 06:45:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.249 06:45:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:37.249 06:45:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:37.249 06:45:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:37.249 06:45:41 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:37.249 06:45:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:37.249 06:45:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.249 06:45:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:37.249 06:45:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.249 06:45:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:37.249 Found net devices under 
0000:0a:00.0: cvl_0_0 00:19:37.249 06:45:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.249 06:45:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:37.249 06:45:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.249 06:45:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:37.249 06:45:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.249 06:45:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:37.249 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:37.249 06:45:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.249 06:45:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:37.249 06:45:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:37.249 06:45:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:37.249 06:45:41 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:37.249 06:45:41 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:37.249 06:45:41 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:37.249 06:45:41 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:37.249 06:45:41 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:37.249 06:45:41 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:37.249 06:45:41 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:37.249 06:45:41 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:37.249 06:45:41 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:37.249 06:45:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:37.249 06:45:41 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:37.250 06:45:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:37.250 06:45:41 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:37.250 06:45:41 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:37.250 06:45:41 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:37.250 06:45:41 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:37.250 06:45:41 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:37.250 06:45:41 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:37.250 06:45:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:37.250 06:45:41 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:37.250 06:45:41 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:37.508 06:45:41 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:37.508 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:37.508 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:19:37.508 00:19:37.508 --- 10.0.0.2 ping statistics --- 00:19:37.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.508 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:19:37.508 06:45:41 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:37.508 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:37.508 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:19:37.508 00:19:37.508 --- 10.0.0.1 ping statistics --- 00:19:37.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.508 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:19:37.508 06:45:41 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:37.508 06:45:41 -- nvmf/common.sh@411 -- # return 0 00:19:37.508 06:45:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:37.508 06:45:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:37.508 06:45:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:37.508 06:45:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:37.508 06:45:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:37.508 06:45:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:37.508 06:45:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:37.508 06:45:41 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:37.508 06:45:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:37.509 06:45:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:37.509 06:45:41 -- common/autotest_common.sh@10 -- # set +x 00:19:37.509 06:45:41 -- nvmf/common.sh@470 -- # nvmfpid=9308 00:19:37.509 06:45:41 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:37.509 06:45:41 -- nvmf/common.sh@471 -- # waitforlisten 9308 00:19:37.509 06:45:41 -- common/autotest_common.sh@817 -- # '[' -z 9308 ']' 00:19:37.509 06:45:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.509 06:45:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:37.509 06:45:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.509 06:45:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:37.509 06:45:41 -- common/autotest_common.sh@10 -- # set +x 00:19:37.509 [2024-04-17 06:45:41.934166] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:19:37.509 [2024-04-17 06:45:41.934259] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:37.509 EAL: No free 2048 kB hugepages reported on node 1 00:19:37.509 [2024-04-17 06:45:42.004898] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.509 [2024-04-17 06:45:42.092146] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:37.509 [2024-04-17 06:45:42.092223] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:37.509 [2024-04-17 06:45:42.092237] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:37.509 [2024-04-17 06:45:42.092248] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:37.509 [2024-04-17 06:45:42.092258] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:37.509 [2024-04-17 06:45:42.092291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.766 06:45:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:37.766 06:45:42 -- common/autotest_common.sh@850 -- # return 0 00:19:37.766 06:45:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:37.766 06:45:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:37.766 06:45:42 -- common/autotest_common.sh@10 -- # set +x 00:19:37.766 06:45:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.766 06:45:42 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:19:37.766 06:45:42 -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:38.024 true 00:19:38.024 06:45:42 -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:38.024 06:45:42 -- target/tls.sh@73 -- # jq -r .tls_version 00:19:38.283 06:45:42 -- target/tls.sh@73 -- # version=0 00:19:38.283 06:45:42 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:19:38.283 06:45:42 -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:38.542 06:45:42 -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:38.542 06:45:42 -- target/tls.sh@81 -- # jq -r .tls_version 00:19:38.800 06:45:43 -- target/tls.sh@81 -- # version=13 00:19:38.800 06:45:43 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:19:38.800 06:45:43 -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:39.058 06:45:43 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:39.058 06:45:43 -- target/tls.sh@89 -- # jq -r .tls_version 00:19:39.315 06:45:43 -- target/tls.sh@89 -- # version=7 00:19:39.315 06:45:43 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:19:39.315 06:45:43 -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:39.315 06:45:43 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:19:39.599 06:45:44 -- target/tls.sh@96 -- # ktls=false 00:19:39.599 06:45:44 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:19:39.599 06:45:44 -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:39.857 06:45:44 -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:39.857 06:45:44 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:19:40.114 06:45:44 -- target/tls.sh@104 -- # ktls=true 00:19:40.114 06:45:44 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:19:40.114 06:45:44 -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:40.372 06:45:44 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:40.372 06:45:44 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:19:40.629 06:45:45 -- target/tls.sh@112 -- # ktls=false 00:19:40.629 06:45:45 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:19:40.630 06:45:45 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 
00:19:40.630 06:45:45 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:40.630 06:45:45 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:40.630 06:45:45 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:19:40.630 06:45:45 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:19:40.630 06:45:45 -- nvmf/common.sh@693 -- # digest=1 00:19:40.630 06:45:45 -- nvmf/common.sh@694 -- # python - 00:19:40.630 06:45:45 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:40.630 06:45:45 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:40.630 06:45:45 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:40.630 06:45:45 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:40.630 06:45:45 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:19:40.630 06:45:45 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:19:40.630 06:45:45 -- nvmf/common.sh@693 -- # digest=1 00:19:40.630 06:45:45 -- nvmf/common.sh@694 -- # python - 00:19:40.630 06:45:45 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:40.630 06:45:45 -- target/tls.sh@121 -- # mktemp 00:19:40.630 06:45:45 -- target/tls.sh@121 -- # key_path=/tmp/tmp.2pLuAhnjXI 00:19:40.630 06:45:45 -- target/tls.sh@122 -- # mktemp 00:19:40.630 06:45:45 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.PQMv31NpnP 00:19:40.630 06:45:45 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:40.630 06:45:45 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:40.630 06:45:45 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.2pLuAhnjXI 00:19:40.630 06:45:45 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.PQMv31NpnP 00:19:40.630 06:45:45 -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:40.887 06:45:45 -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:41.145 06:45:45 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.2pLuAhnjXI 00:19:41.145 06:45:45 -- target/tls.sh@49 -- # local key=/tmp/tmp.2pLuAhnjXI 00:19:41.145 06:45:45 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:41.402 [2024-04-17 06:45:45.898281] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:41.402 06:45:45 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:41.660 06:45:46 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:41.918 [2024-04-17 06:45:46.443790] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:41.918 [2024-04-17 06:45:46.444038] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:41.918 06:45:46 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:42.176 malloc0 00:19:42.176 06:45:46 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:42.433 06:45:46 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2pLuAhnjXI 00:19:42.691 [2024-04-17 06:45:47.181431] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:42.691 06:45:47 -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.2pLuAhnjXI 00:19:42.691 EAL: No free 2048 kB hugepages reported on node 1 00:19:54.890 Initializing NVMe Controllers 00:19:54.890 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:54.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:54.890 Initialization complete. Launching workers. 00:19:54.890 ======================================================== 00:19:54.890 Latency(us) 00:19:54.890 Device Information : IOPS MiB/s Average min max 00:19:54.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7842.87 30.64 8163.01 1104.26 9992.43 00:19:54.890 ======================================================== 00:19:54.890 Total : 7842.87 30.64 8163.01 1104.26 9992.43 00:19:54.890 00:19:54.890 06:45:57 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2pLuAhnjXI 00:19:54.890 06:45:57 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:54.890 06:45:57 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:54.890 06:45:57 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:54.890 06:45:57 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2pLuAhnjXI' 00:19:54.890 06:45:57 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:54.890 06:45:57 -- target/tls.sh@28 -- # bdevperf_pid=11131 00:19:54.890 06:45:57 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:54.890 06:45:57 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:54.890 06:45:57 -- target/tls.sh@31 -- # waitforlisten 11131 /var/tmp/bdevperf.sock 00:19:54.890 06:45:57 -- common/autotest_common.sh@817 -- # '[' -z 11131 ']' 00:19:54.890 06:45:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:54.890 06:45:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:54.890 06:45:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:54.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:54.890 06:45:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:54.890 06:45:57 -- common/autotest_common.sh@10 -- # set +x 00:19:54.890 [2024-04-17 06:45:57.336898] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
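The target/tls.sh setup traced above reduces to a short RPC sequence plus a PSK-protected perf run. A condensed sketch of that sequence, with the PSK file, NQNs and listener address taken from the log and the rpc.py/spdk_nvme_perf workspace paths shortened (rpc.py is assumed to use its default /var/tmp/spdk.sock):

  # Condensed from the target/tls.sh trace above.
  PSK=/tmp/tmp.2pLuAhnjXI        # 0600 file holding the NVMeTLSkey-1:01:...: string from the log
  ./scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$PSK"
  # Initiator-side perf over TLS, run inside the test namespace as in the log:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
      --psk-path "$PSK"
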
00:19:54.890 [2024-04-17 06:45:57.336988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid11131 ] 00:19:54.890 EAL: No free 2048 kB hugepages reported on node 1 00:19:54.890 [2024-04-17 06:45:57.396645] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.890 [2024-04-17 06:45:57.487659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:54.890 06:45:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:54.890 06:45:57 -- common/autotest_common.sh@850 -- # return 0 00:19:54.890 06:45:57 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2pLuAhnjXI 00:19:54.890 [2024-04-17 06:45:57.853507] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:54.890 [2024-04-17 06:45:57.853633] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:54.890 TLSTESTn1 00:19:54.890 06:45:57 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:54.890 Running I/O for 10 seconds... 00:20:04.855 00:20:04.855 Latency(us) 00:20:04.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.855 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:04.855 Verification LBA range: start 0x0 length 0x2000 00:20:04.855 TLSTESTn1 : 10.04 2619.85 10.23 0.00 0.00 48731.13 6189.51 71070.15 00:20:04.855 =================================================================================================================== 00:20:04.855 Total : 2619.85 10.23 0.00 0.00 48731.13 6189.51 71070.15 00:20:04.855 0 00:20:04.855 06:46:08 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:04.855 06:46:08 -- target/tls.sh@45 -- # killprocess 11131 00:20:04.855 06:46:08 -- common/autotest_common.sh@936 -- # '[' -z 11131 ']' 00:20:04.855 06:46:08 -- common/autotest_common.sh@940 -- # kill -0 11131 00:20:04.855 06:46:08 -- common/autotest_common.sh@941 -- # uname 00:20:04.855 06:46:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:04.855 06:46:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 11131 00:20:04.855 06:46:08 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:04.855 06:46:08 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:04.855 06:46:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 11131' 00:20:04.855 killing process with pid 11131 00:20:04.855 06:46:08 -- common/autotest_common.sh@955 -- # kill 11131 00:20:04.855 Received shutdown signal, test time was about 10.000000 seconds 00:20:04.855 00:20:04.855 Latency(us) 00:20:04.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.855 =================================================================================================================== 00:20:04.855 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:04.855 [2024-04-17 06:46:08.169983] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: 
deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:04.855 06:46:08 -- common/autotest_common.sh@960 -- # wait 11131 00:20:04.855 06:46:08 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PQMv31NpnP 00:20:04.855 06:46:08 -- common/autotest_common.sh@638 -- # local es=0 00:20:04.855 06:46:08 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PQMv31NpnP 00:20:04.855 06:46:08 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:20:04.855 06:46:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:04.855 06:46:08 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:20:04.855 06:46:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:04.855 06:46:08 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PQMv31NpnP 00:20:04.855 06:46:08 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:04.855 06:46:08 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:04.855 06:46:08 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:04.856 06:46:08 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.PQMv31NpnP' 00:20:04.856 06:46:08 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:04.856 06:46:08 -- target/tls.sh@28 -- # bdevperf_pid=12442 00:20:04.856 06:46:08 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:04.856 06:46:08 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:04.856 06:46:08 -- target/tls.sh@31 -- # waitforlisten 12442 /var/tmp/bdevperf.sock 00:20:04.856 06:46:08 -- common/autotest_common.sh@817 -- # '[' -z 12442 ']' 00:20:04.856 06:46:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.856 06:46:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:04.856 06:46:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:04.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:04.856 06:46:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:04.856 06:46:08 -- common/autotest_common.sh@10 -- # set +x 00:20:04.856 [2024-04-17 06:46:08.438878] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:20:04.856 [2024-04-17 06:46:08.438946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid12442 ] 00:20:04.856 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.856 [2024-04-17 06:46:08.496649] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.856 [2024-04-17 06:46:08.578337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.856 06:46:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:04.856 06:46:08 -- common/autotest_common.sh@850 -- # return 0 00:20:04.856 06:46:08 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PQMv31NpnP 00:20:04.856 [2024-04-17 06:46:08.902350] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:04.856 [2024-04-17 06:46:08.902455] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:04.856 [2024-04-17 06:46:08.913710] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:04.856 [2024-04-17 06:46:08.914278] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9ee80 (107): Transport endpoint is not connected 00:20:04.856 [2024-04-17 06:46:08.915268] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9ee80 (9): Bad file descriptor 00:20:04.856 [2024-04-17 06:46:08.916267] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:04.856 [2024-04-17 06:46:08.916286] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:04.856 [2024-04-17 06:46:08.916299] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:04.856 request: 00:20:04.856 { 00:20:04.856 "name": "TLSTEST", 00:20:04.856 "trtype": "tcp", 00:20:04.856 "traddr": "10.0.0.2", 00:20:04.856 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:04.856 "adrfam": "ipv4", 00:20:04.856 "trsvcid": "4420", 00:20:04.856 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.856 "psk": "/tmp/tmp.PQMv31NpnP", 00:20:04.856 "method": "bdev_nvme_attach_controller", 00:20:04.856 "req_id": 1 00:20:04.856 } 00:20:04.856 Got JSON-RPC error response 00:20:04.856 response: 00:20:04.856 { 00:20:04.856 "code": -32602, 00:20:04.856 "message": "Invalid parameters" 00:20:04.856 } 00:20:04.856 06:46:08 -- target/tls.sh@36 -- # killprocess 12442 00:20:04.856 06:46:08 -- common/autotest_common.sh@936 -- # '[' -z 12442 ']' 00:20:04.856 06:46:08 -- common/autotest_common.sh@940 -- # kill -0 12442 00:20:04.856 06:46:08 -- common/autotest_common.sh@941 -- # uname 00:20:04.856 06:46:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:04.856 06:46:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 12442 00:20:04.856 06:46:08 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:04.856 06:46:08 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:04.856 06:46:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 12442' 00:20:04.856 killing process with pid 12442 00:20:04.856 06:46:08 -- common/autotest_common.sh@955 -- # kill 12442 00:20:04.856 Received shutdown signal, test time was about 10.000000 seconds 00:20:04.856 00:20:04.856 Latency(us) 00:20:04.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.856 =================================================================================================================== 00:20:04.856 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:04.856 [2024-04-17 06:46:08.964475] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:04.856 06:46:08 -- common/autotest_common.sh@960 -- # wait 12442 00:20:04.856 06:46:09 -- target/tls.sh@37 -- # return 1 00:20:04.856 06:46:09 -- common/autotest_common.sh@641 -- # es=1 00:20:04.856 06:46:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:04.856 06:46:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:04.856 06:46:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:04.856 06:46:09 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2pLuAhnjXI 00:20:04.856 06:46:09 -- common/autotest_common.sh@638 -- # local es=0 00:20:04.856 06:46:09 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2pLuAhnjXI 00:20:04.856 06:46:09 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:20:04.856 06:46:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:04.856 06:46:09 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:20:04.856 06:46:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:04.856 06:46:09 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2pLuAhnjXI 00:20:04.856 06:46:09 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:04.856 06:46:09 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:04.856 06:46:09 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:04.856 
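The bdevperf instance above (pid 12442) exercises the wrong-key case: it presented /tmp/tmp.PQMv31NpnP, which does not match the PSK registered on the target for host1/cnode1, so the connection is dropped and bdev_nvme_attach_controller comes back with -32602 instead of creating a bdev. A minimal sketch of that negative-path expectation, reusing the rpc.py invocation from the trace (the script's NOT/run_bdevperf wrappers are not reproduced here):

# Expect the attach to fail when the PSK file does not match what the target has registered.
if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PQMv31NpnP; then
    echo "unexpected success: controller attached with a non-matching PSK" >&2
    exit 1
fi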
06:46:09 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2pLuAhnjXI' 00:20:04.856 06:46:09 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:04.856 06:46:09 -- target/tls.sh@28 -- # bdevperf_pid=12476 00:20:04.856 06:46:09 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:04.856 06:46:09 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:04.856 06:46:09 -- target/tls.sh@31 -- # waitforlisten 12476 /var/tmp/bdevperf.sock 00:20:04.856 06:46:09 -- common/autotest_common.sh@817 -- # '[' -z 12476 ']' 00:20:04.856 06:46:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.856 06:46:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:04.856 06:46:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:04.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:04.856 06:46:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:04.856 06:46:09 -- common/autotest_common.sh@10 -- # set +x 00:20:04.856 [2024-04-17 06:46:09.227655] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:20:04.856 [2024-04-17 06:46:09.227751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid12476 ] 00:20:04.856 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.856 [2024-04-17 06:46:09.291878] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.856 [2024-04-17 06:46:09.376174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:05.114 06:46:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:05.114 06:46:09 -- common/autotest_common.sh@850 -- # return 0 00:20:05.114 06:46:09 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.2pLuAhnjXI 00:20:05.372 [2024-04-17 06:46:09.723159] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:05.372 [2024-04-17 06:46:09.723328] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:05.372 [2024-04-17 06:46:09.728675] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:05.372 [2024-04-17 06:46:09.728708] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:05.372 [2024-04-17 06:46:09.728747] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:05.372 [2024-04-17 06:46:09.729235] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x120de80 (107): Transport endpoint is not connected 00:20:05.372 [2024-04-17 06:46:09.730223] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x120de80 (9): Bad file descriptor 00:20:05.372 [2024-04-17 06:46:09.731221] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:05.372 [2024-04-17 06:46:09.731243] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:05.372 [2024-04-17 06:46:09.731255] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:05.372 request: 00:20:05.372 { 00:20:05.372 "name": "TLSTEST", 00:20:05.372 "trtype": "tcp", 00:20:05.372 "traddr": "10.0.0.2", 00:20:05.372 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:05.372 "adrfam": "ipv4", 00:20:05.372 "trsvcid": "4420", 00:20:05.372 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.372 "psk": "/tmp/tmp.2pLuAhnjXI", 00:20:05.372 "method": "bdev_nvme_attach_controller", 00:20:05.372 "req_id": 1 00:20:05.372 } 00:20:05.372 Got JSON-RPC error response 00:20:05.372 response: 00:20:05.373 { 00:20:05.373 "code": -32602, 00:20:05.373 "message": "Invalid parameters" 00:20:05.373 } 00:20:05.373 06:46:09 -- target/tls.sh@36 -- # killprocess 12476 00:20:05.373 06:46:09 -- common/autotest_common.sh@936 -- # '[' -z 12476 ']' 00:20:05.373 06:46:09 -- common/autotest_common.sh@940 -- # kill -0 12476 00:20:05.373 06:46:09 -- common/autotest_common.sh@941 -- # uname 00:20:05.373 06:46:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:05.373 06:46:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 12476 00:20:05.373 06:46:09 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:05.373 06:46:09 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:05.373 06:46:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 12476' 00:20:05.373 killing process with pid 12476 00:20:05.373 06:46:09 -- common/autotest_common.sh@955 -- # kill 12476 00:20:05.373 Received shutdown signal, test time was about 10.000000 seconds 00:20:05.373 00:20:05.373 Latency(us) 00:20:05.373 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.373 =================================================================================================================== 00:20:05.373 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:05.373 [2024-04-17 06:46:09.783590] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:05.373 06:46:09 -- common/autotest_common.sh@960 -- # wait 12476 00:20:05.631 06:46:09 -- target/tls.sh@37 -- # return 1 00:20:05.631 06:46:09 -- common/autotest_common.sh@641 -- # es=1 00:20:05.631 06:46:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:05.631 06:46:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:05.631 06:46:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:05.631 06:46:09 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2pLuAhnjXI 00:20:05.631 06:46:09 -- common/autotest_common.sh@638 -- # local es=0 00:20:05.631 06:46:09 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2pLuAhnjXI 00:20:05.631 06:46:09 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:20:05.631 06:46:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:05.631 06:46:09 -- 
common/autotest_common.sh@630 -- # type -t run_bdevperf 00:20:05.631 06:46:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:05.631 06:46:09 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2pLuAhnjXI 00:20:05.631 06:46:09 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:05.631 06:46:09 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:05.631 06:46:09 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:05.631 06:46:09 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2pLuAhnjXI' 00:20:05.631 06:46:09 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:05.631 06:46:09 -- target/tls.sh@28 -- # bdevperf_pid=12606 00:20:05.631 06:46:09 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:05.631 06:46:09 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:05.631 06:46:09 -- target/tls.sh@31 -- # waitforlisten 12606 /var/tmp/bdevperf.sock 00:20:05.631 06:46:09 -- common/autotest_common.sh@817 -- # '[' -z 12606 ']' 00:20:05.631 06:46:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:05.631 06:46:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:05.631 06:46:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:05.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:05.631 06:46:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:05.631 06:46:09 -- common/autotest_common.sh@10 -- # set +x 00:20:05.631 [2024-04-17 06:46:10.042949] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:20:05.631 [2024-04-17 06:46:10.043051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid12606 ] 00:20:05.631 EAL: No free 2048 kB hugepages reported on node 1 00:20:05.631 [2024-04-17 06:46:10.110311] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.631 [2024-04-17 06:46:10.201514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:05.889 06:46:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:05.889 06:46:10 -- common/autotest_common.sh@850 -- # return 0 00:20:05.889 06:46:10 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2pLuAhnjXI 00:20:06.147 [2024-04-17 06:46:10.544939] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:06.147 [2024-04-17 06:46:10.545053] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:06.147 [2024-04-17 06:46:10.550502] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:06.147 [2024-04-17 06:46:10.550535] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:06.147 [2024-04-17 06:46:10.550575] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:06.147 [2024-04-17 06:46:10.551015] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bece80 (107): Transport endpoint is not connected 00:20:06.147 [2024-04-17 06:46:10.552003] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bece80 (9): Bad file descriptor 00:20:06.147 [2024-04-17 06:46:10.553002] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:06.147 [2024-04-17 06:46:10.553023] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:06.147 [2024-04-17 06:46:10.553035] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:06.147 request: 00:20:06.147 { 00:20:06.147 "name": "TLSTEST", 00:20:06.147 "trtype": "tcp", 00:20:06.147 "traddr": "10.0.0.2", 00:20:06.147 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:06.147 "adrfam": "ipv4", 00:20:06.147 "trsvcid": "4420", 00:20:06.147 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:06.147 "psk": "/tmp/tmp.2pLuAhnjXI", 00:20:06.147 "method": "bdev_nvme_attach_controller", 00:20:06.147 "req_id": 1 00:20:06.147 } 00:20:06.147 Got JSON-RPC error response 00:20:06.147 response: 00:20:06.147 { 00:20:06.147 "code": -32602, 00:20:06.147 "message": "Invalid parameters" 00:20:06.147 } 00:20:06.147 06:46:10 -- target/tls.sh@36 -- # killprocess 12606 00:20:06.147 06:46:10 -- common/autotest_common.sh@936 -- # '[' -z 12606 ']' 00:20:06.147 06:46:10 -- common/autotest_common.sh@940 -- # kill -0 12606 00:20:06.147 06:46:10 -- common/autotest_common.sh@941 -- # uname 00:20:06.147 06:46:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:06.147 06:46:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 12606 00:20:06.147 06:46:10 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:06.147 06:46:10 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:06.147 06:46:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 12606' 00:20:06.147 killing process with pid 12606 00:20:06.147 06:46:10 -- common/autotest_common.sh@955 -- # kill 12606 00:20:06.147 Received shutdown signal, test time was about 10.000000 seconds 00:20:06.147 00:20:06.147 Latency(us) 00:20:06.147 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.147 =================================================================================================================== 00:20:06.147 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:06.147 [2024-04-17 06:46:10.606957] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:06.147 06:46:10 -- common/autotest_common.sh@960 -- # wait 12606 00:20:06.405 06:46:10 -- target/tls.sh@37 -- # return 1 00:20:06.405 06:46:10 -- common/autotest_common.sh@641 -- # es=1 00:20:06.405 06:46:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:06.405 06:46:10 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:06.405 06:46:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:06.405 06:46:10 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:06.405 06:46:10 -- common/autotest_common.sh@638 -- # local es=0 00:20:06.405 06:46:10 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:06.405 06:46:10 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:20:06.405 06:46:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:06.405 06:46:10 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:20:06.405 06:46:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:06.405 06:46:10 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:06.405 06:46:10 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:06.405 06:46:10 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:06.405 06:46:10 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:06.405 06:46:10 -- target/tls.sh@23 -- # psk= 00:20:06.405 
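The failing attaches so far break the registration in different ways: a key that does not match (/tmp/tmp.PQMv31NpnP), a host NQN (host2) that was never added to the subsystem, and a subsystem NQN (cnode2) that does not exist. For the NQN mismatches the target reports that it cannot find a PSK for the identity "NVMe0R01 <hostnqn> <subnqn>" and drops the connection, which the initiator side sees as errno 107. A small, purely illustrative sketch of that identity string, built from the NQNs used above (the format is copied from the error lines in this trace; how the target composes it internally is an assumption):

# Hypothetical illustration: the PSK identity the lookup errors above refer to.
hostnqn=nqn.2016-06.io.spdk:host2
subnqn=nqn.2016-06.io.spdk:cnode1
echo "NVMe0R01 ${hostnqn} ${subnqn}"
# Only the host1 + cnode1 pair was registered with nvmf_subsystem_add_host, so this
# identity, like the cnode2 variant, has no PSK to resolve to.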
06:46:10 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:06.405 06:46:10 -- target/tls.sh@28 -- # bdevperf_pid=12734 00:20:06.405 06:46:10 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:06.405 06:46:10 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:06.405 06:46:10 -- target/tls.sh@31 -- # waitforlisten 12734 /var/tmp/bdevperf.sock 00:20:06.405 06:46:10 -- common/autotest_common.sh@817 -- # '[' -z 12734 ']' 00:20:06.405 06:46:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:06.405 06:46:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:06.405 06:46:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:06.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:06.405 06:46:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:06.405 06:46:10 -- common/autotest_common.sh@10 -- # set +x 00:20:06.405 [2024-04-17 06:46:10.872359] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:20:06.405 [2024-04-17 06:46:10.872454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid12734 ] 00:20:06.405 EAL: No free 2048 kB hugepages reported on node 1 00:20:06.405 [2024-04-17 06:46:10.931697] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.663 [2024-04-17 06:46:11.016472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:06.663 06:46:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:06.663 06:46:11 -- common/autotest_common.sh@850 -- # return 0 00:20:06.663 06:46:11 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:06.921 [2024-04-17 06:46:11.339995] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:06.921 [2024-04-17 06:46:11.341967] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1705570 (9): Bad file descriptor 00:20:06.921 [2024-04-17 06:46:11.342962] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:06.921 [2024-04-17 06:46:11.342983] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:06.921 [2024-04-17 06:46:11.342995] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:06.921 request: 00:20:06.921 { 00:20:06.921 "name": "TLSTEST", 00:20:06.921 "trtype": "tcp", 00:20:06.921 "traddr": "10.0.0.2", 00:20:06.921 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:06.921 "adrfam": "ipv4", 00:20:06.921 "trsvcid": "4420", 00:20:06.921 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.921 "method": "bdev_nvme_attach_controller", 00:20:06.921 "req_id": 1 00:20:06.921 } 00:20:06.921 Got JSON-RPC error response 00:20:06.921 response: 00:20:06.921 { 00:20:06.921 "code": -32602, 00:20:06.921 "message": "Invalid parameters" 00:20:06.921 } 00:20:06.921 06:46:11 -- target/tls.sh@36 -- # killprocess 12734 00:20:06.921 06:46:11 -- common/autotest_common.sh@936 -- # '[' -z 12734 ']' 00:20:06.921 06:46:11 -- common/autotest_common.sh@940 -- # kill -0 12734 00:20:06.921 06:46:11 -- common/autotest_common.sh@941 -- # uname 00:20:06.921 06:46:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:06.921 06:46:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 12734 00:20:06.921 06:46:11 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:06.921 06:46:11 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:06.921 06:46:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 12734' 00:20:06.921 killing process with pid 12734 00:20:06.921 06:46:11 -- common/autotest_common.sh@955 -- # kill 12734 00:20:06.921 Received shutdown signal, test time was about 10.000000 seconds 00:20:06.921 00:20:06.921 Latency(us) 00:20:06.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.921 =================================================================================================================== 00:20:06.921 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:06.921 06:46:11 -- common/autotest_common.sh@960 -- # wait 12734 00:20:07.180 06:46:11 -- target/tls.sh@37 -- # return 1 00:20:07.180 06:46:11 -- common/autotest_common.sh@641 -- # es=1 00:20:07.180 06:46:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:07.180 06:46:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:07.180 06:46:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:07.180 06:46:11 -- target/tls.sh@158 -- # killprocess 9308 00:20:07.180 06:46:11 -- common/autotest_common.sh@936 -- # '[' -z 9308 ']' 00:20:07.180 06:46:11 -- common/autotest_common.sh@940 -- # kill -0 9308 00:20:07.180 06:46:11 -- common/autotest_common.sh@941 -- # uname 00:20:07.180 06:46:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:07.180 06:46:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 9308 00:20:07.180 06:46:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:07.180 06:46:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:07.180 06:46:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 9308' 00:20:07.180 killing process with pid 9308 00:20:07.180 06:46:11 -- common/autotest_common.sh@955 -- # kill 9308 00:20:07.180 [2024-04-17 06:46:11.632331] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:07.180 06:46:11 -- common/autotest_common.sh@960 -- # wait 9308 00:20:07.438 06:46:11 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:07.438 06:46:11 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:07.438 
06:46:11 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:07.438 06:46:11 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:20:07.438 06:46:11 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:07.438 06:46:11 -- nvmf/common.sh@693 -- # digest=2 00:20:07.438 06:46:11 -- nvmf/common.sh@694 -- # python - 00:20:07.438 06:46:11 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:07.438 06:46:11 -- target/tls.sh@160 -- # mktemp 00:20:07.438 06:46:11 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.9Mng7s5y3P 00:20:07.438 06:46:11 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:07.438 06:46:11 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.9Mng7s5y3P 00:20:07.438 06:46:11 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:07.438 06:46:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:07.438 06:46:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:07.438 06:46:11 -- common/autotest_common.sh@10 -- # set +x 00:20:07.438 06:46:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:07.438 06:46:11 -- nvmf/common.sh@470 -- # nvmfpid=12886 00:20:07.438 06:46:11 -- nvmf/common.sh@471 -- # waitforlisten 12886 00:20:07.438 06:46:11 -- common/autotest_common.sh@817 -- # '[' -z 12886 ']' 00:20:07.438 06:46:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.438 06:46:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:07.438 06:46:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.439 06:46:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:07.439 06:46:11 -- common/autotest_common.sh@10 -- # set +x 00:20:07.439 [2024-04-17 06:46:11.990913] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:20:07.439 [2024-04-17 06:46:11.990989] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.439 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.698 [2024-04-17 06:46:12.060069] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.698 [2024-04-17 06:46:12.152171] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.698 [2024-04-17 06:46:12.152260] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.698 [2024-04-17 06:46:12.152274] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.698 [2024-04-17 06:46:12.152285] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.698 [2024-04-17 06:46:12.152294] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
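format_interchange_psk above converts the 48-character hex string into the TLS PSK interchange form (NVMeTLSkey-1:02:<base64>:) that is then written to /tmp/tmp.9Mng7s5y3P and restricted to mode 0600. A sketch of what the helper appears to compute, assuming the key text is used as literal ASCII bytes and a little-endian CRC-32 of those bytes is appended before base64 encoding (the embedded-python style mirrors the traced nvmf/common.sh helper):

python3 - <<'EOF'
import base64, zlib

key = b"00112233445566778899aabbccddeeff0011223344556677"  # used as literal ASCII bytes
digest = 2                                                  # hash indicator field in the prefix
crc = zlib.crc32(key).to_bytes(4, "little")                 # assumption: CRC-32 appended little-endian
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
# If those assumptions hold, the output matches the key_long value captured in the trace.
EOF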
00:20:07.698 [2024-04-17 06:46:12.152327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.698 06:46:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:07.698 06:46:12 -- common/autotest_common.sh@850 -- # return 0 00:20:07.698 06:46:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:07.698 06:46:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:07.698 06:46:12 -- common/autotest_common.sh@10 -- # set +x 00:20:07.698 06:46:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.698 06:46:12 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.9Mng7s5y3P 00:20:07.698 06:46:12 -- target/tls.sh@49 -- # local key=/tmp/tmp.9Mng7s5y3P 00:20:07.698 06:46:12 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:08.025 [2024-04-17 06:46:12.545455] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:08.025 06:46:12 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:08.282 06:46:12 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:08.540 [2024-04-17 06:46:13.030782] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:08.540 [2024-04-17 06:46:13.031005] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:08.540 06:46:13 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:08.797 malloc0 00:20:08.797 06:46:13 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:09.054 06:46:13 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9Mng7s5y3P 00:20:09.311 [2024-04-17 06:46:13.736338] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:09.311 06:46:13 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9Mng7s5y3P 00:20:09.311 06:46:13 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:09.311 06:46:13 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:09.311 06:46:13 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:09.311 06:46:13 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9Mng7s5y3P' 00:20:09.311 06:46:13 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:09.311 06:46:13 -- target/tls.sh@28 -- # bdevperf_pid=13111 00:20:09.311 06:46:13 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:09.311 06:46:13 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:09.311 06:46:13 -- target/tls.sh@31 -- # waitforlisten 13111 /var/tmp/bdevperf.sock 00:20:09.311 06:46:13 -- common/autotest_common.sh@817 -- # '[' -z 13111 ']' 00:20:09.311 06:46:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:09.311 06:46:13 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:20:09.311 06:46:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:09.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:09.311 06:46:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:09.311 06:46:13 -- common/autotest_common.sh@10 -- # set +x 00:20:09.311 [2024-04-17 06:46:13.799867] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:20:09.311 [2024-04-17 06:46:13.799957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid13111 ] 00:20:09.311 EAL: No free 2048 kB hugepages reported on node 1 00:20:09.311 [2024-04-17 06:46:13.858714] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.568 [2024-04-17 06:46:13.940906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:09.568 06:46:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:09.568 06:46:14 -- common/autotest_common.sh@850 -- # return 0 00:20:09.569 06:46:14 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9Mng7s5y3P 00:20:09.825 [2024-04-17 06:46:14.258673] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:09.825 [2024-04-17 06:46:14.258787] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:09.825 TLSTESTn1 00:20:09.825 06:46:14 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:10.083 Running I/O for 10 seconds... 
00:20:20.044 00:20:20.044 Latency(us) 00:20:20.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.044 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:20.044 Verification LBA range: start 0x0 length 0x2000 00:20:20.044 TLSTESTn1 : 10.04 2338.33 9.13 0.00 0.00 54609.22 8932.31 88934.78 00:20:20.044 =================================================================================================================== 00:20:20.044 Total : 2338.33 9.13 0.00 0.00 54609.22 8932.31 88934.78 00:20:20.044 0 00:20:20.044 06:46:24 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:20.044 06:46:24 -- target/tls.sh@45 -- # killprocess 13111 00:20:20.044 06:46:24 -- common/autotest_common.sh@936 -- # '[' -z 13111 ']' 00:20:20.044 06:46:24 -- common/autotest_common.sh@940 -- # kill -0 13111 00:20:20.044 06:46:24 -- common/autotest_common.sh@941 -- # uname 00:20:20.044 06:46:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:20.044 06:46:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 13111 00:20:20.044 06:46:24 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:20.044 06:46:24 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:20.044 06:46:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 13111' 00:20:20.044 killing process with pid 13111 00:20:20.044 06:46:24 -- common/autotest_common.sh@955 -- # kill 13111 00:20:20.044 Received shutdown signal, test time was about 10.000000 seconds 00:20:20.044 00:20:20.044 Latency(us) 00:20:20.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.044 =================================================================================================================== 00:20:20.044 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:20.044 [2024-04-17 06:46:24.550716] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:20.044 06:46:24 -- common/autotest_common.sh@960 -- # wait 13111 00:20:20.302 06:46:24 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.9Mng7s5y3P 00:20:20.302 06:46:24 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9Mng7s5y3P 00:20:20.302 06:46:24 -- common/autotest_common.sh@638 -- # local es=0 00:20:20.302 06:46:24 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9Mng7s5y3P 00:20:20.302 06:46:24 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:20:20.302 06:46:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:20.302 06:46:24 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:20:20.302 06:46:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:20.302 06:46:24 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9Mng7s5y3P 00:20:20.302 06:46:24 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:20.302 06:46:24 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:20.302 06:46:24 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:20.302 06:46:24 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9Mng7s5y3P' 00:20:20.302 06:46:24 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:20.302 06:46:24 -- target/tls.sh@28 -- # bdevperf_pid=14367 00:20:20.302 
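With the baseline TLSTESTn1 run complete, the key file is deliberately loosened to 0666 before the next attach. The errors that follow ("Incorrect permissions for PSK file", "Operation not permitted") show the intent: a PSK file readable by group or other must be refused, and only the owner-only 0600 mode used so far is acceptable (that permission policy is inferred from this trace). A short sketch of the check:

chmod 0666 /tmp/tmp.9Mng7s5y3P    # too permissive: the attach below is expected to fail
if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9Mng7s5y3P; then
    echo "unexpected success: group/other-readable PSK file was accepted" >&2
    exit 1
fi
chmod 0600 /tmp/tmp.9Mng7s5y3P    # restore before the key is used again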
06:46:24 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:20.302 06:46:24 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:20.302 06:46:24 -- target/tls.sh@31 -- # waitforlisten 14367 /var/tmp/bdevperf.sock 00:20:20.302 06:46:24 -- common/autotest_common.sh@817 -- # '[' -z 14367 ']' 00:20:20.302 06:46:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:20.302 06:46:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:20.302 06:46:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:20.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:20.302 06:46:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:20.302 06:46:24 -- common/autotest_common.sh@10 -- # set +x 00:20:20.302 [2024-04-17 06:46:24.810062] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:20:20.302 [2024-04-17 06:46:24.810152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid14367 ] 00:20:20.302 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.302 [2024-04-17 06:46:24.872622] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.560 [2024-04-17 06:46:24.955772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:20.560 06:46:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:20.560 06:46:25 -- common/autotest_common.sh@850 -- # return 0 00:20:20.560 06:46:25 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9Mng7s5y3P 00:20:20.818 [2024-04-17 06:46:25.295151] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:20.818 [2024-04-17 06:46:25.295234] bdev_nvme.c:6054:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:20.818 [2024-04-17 06:46:25.295248] bdev_nvme.c:6163:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.9Mng7s5y3P 00:20:20.818 request: 00:20:20.818 { 00:20:20.818 "name": "TLSTEST", 00:20:20.818 "trtype": "tcp", 00:20:20.818 "traddr": "10.0.0.2", 00:20:20.818 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:20.818 "adrfam": "ipv4", 00:20:20.818 "trsvcid": "4420", 00:20:20.818 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.818 "psk": "/tmp/tmp.9Mng7s5y3P", 00:20:20.818 "method": "bdev_nvme_attach_controller", 00:20:20.818 "req_id": 1 00:20:20.818 } 00:20:20.818 Got JSON-RPC error response 00:20:20.818 response: 00:20:20.818 { 00:20:20.818 "code": -1, 00:20:20.818 "message": "Operation not permitted" 00:20:20.818 } 00:20:20.818 06:46:25 -- target/tls.sh@36 -- # killprocess 14367 00:20:20.818 06:46:25 -- common/autotest_common.sh@936 -- # '[' -z 14367 ']' 00:20:20.818 06:46:25 -- common/autotest_common.sh@940 -- # kill -0 14367 00:20:20.818 06:46:25 -- common/autotest_common.sh@941 -- # uname 00:20:20.818 06:46:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:20.818 06:46:25 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 14367 00:20:20.818 06:46:25 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:20.818 06:46:25 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:20.818 06:46:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 14367' 00:20:20.818 killing process with pid 14367 00:20:20.818 06:46:25 -- common/autotest_common.sh@955 -- # kill 14367 00:20:20.818 Received shutdown signal, test time was about 10.000000 seconds 00:20:20.818 00:20:20.818 Latency(us) 00:20:20.818 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.818 =================================================================================================================== 00:20:20.818 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:20.818 06:46:25 -- common/autotest_common.sh@960 -- # wait 14367 00:20:21.076 06:46:25 -- target/tls.sh@37 -- # return 1 00:20:21.076 06:46:25 -- common/autotest_common.sh@641 -- # es=1 00:20:21.076 06:46:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:21.076 06:46:25 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:21.076 06:46:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:21.076 06:46:25 -- target/tls.sh@174 -- # killprocess 12886 00:20:21.076 06:46:25 -- common/autotest_common.sh@936 -- # '[' -z 12886 ']' 00:20:21.076 06:46:25 -- common/autotest_common.sh@940 -- # kill -0 12886 00:20:21.076 06:46:25 -- common/autotest_common.sh@941 -- # uname 00:20:21.076 06:46:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:21.076 06:46:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 12886 00:20:21.076 06:46:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:21.076 06:46:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:21.076 06:46:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 12886' 00:20:21.076 killing process with pid 12886 00:20:21.076 06:46:25 -- common/autotest_common.sh@955 -- # kill 12886 00:20:21.076 [2024-04-17 06:46:25.590439] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:21.076 06:46:25 -- common/autotest_common.sh@960 -- # wait 12886 00:20:21.334 06:46:25 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:21.334 06:46:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:21.334 06:46:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:21.334 06:46:25 -- common/autotest_common.sh@10 -- # set +x 00:20:21.334 06:46:25 -- nvmf/common.sh@470 -- # nvmfpid=14512 00:20:21.334 06:46:25 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:21.334 06:46:25 -- nvmf/common.sh@471 -- # waitforlisten 14512 00:20:21.334 06:46:25 -- common/autotest_common.sh@817 -- # '[' -z 14512 ']' 00:20:21.334 06:46:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.334 06:46:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:21.334 06:46:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
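A fresh nvmf target (pid 14512) is started here while /tmp/tmp.9Mng7s5y3P is still mode 0666, so the same permission failure is now expected on the target side: setup_nvmf_tgt replays the RPC sequence below and nvmf_subsystem_add_host is the step that should reject the key. The sequence as it appears in this trace (rpc.py abbreviates the full scripts/rpc.py path used throughout):

rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9Mng7s5y3P
# With the key file still at 0666, the last call fails with "Incorrect permissions for
# PSK file" and the RPC returns "Internal error" (-32603), as shown below.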
00:20:21.334 06:46:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:21.334 06:46:25 -- common/autotest_common.sh@10 -- # set +x 00:20:21.334 [2024-04-17 06:46:25.886941] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:20:21.334 [2024-04-17 06:46:25.887041] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:21.334 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.591 [2024-04-17 06:46:25.951996] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.591 [2024-04-17 06:46:26.036999] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:21.591 [2024-04-17 06:46:26.037054] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:21.591 [2024-04-17 06:46:26.037078] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:21.591 [2024-04-17 06:46:26.037089] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:21.591 [2024-04-17 06:46:26.037099] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:21.591 [2024-04-17 06:46:26.037134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.591 06:46:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:21.591 06:46:26 -- common/autotest_common.sh@850 -- # return 0 00:20:21.591 06:46:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:21.591 06:46:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:21.591 06:46:26 -- common/autotest_common.sh@10 -- # set +x 00:20:21.591 06:46:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:21.591 06:46:26 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.9Mng7s5y3P 00:20:21.591 06:46:26 -- common/autotest_common.sh@638 -- # local es=0 00:20:21.591 06:46:26 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.9Mng7s5y3P 00:20:21.591 06:46:26 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:20:21.591 06:46:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:21.591 06:46:26 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:20:21.591 06:46:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:21.591 06:46:26 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.9Mng7s5y3P 00:20:21.591 06:46:26 -- target/tls.sh@49 -- # local key=/tmp/tmp.9Mng7s5y3P 00:20:21.591 06:46:26 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:21.849 [2024-04-17 06:46:26.407876] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:21.849 06:46:26 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:22.107 06:46:26 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:22.364 [2024-04-17 06:46:26.905161] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:22.364 [2024-04-17 06:46:26.905417] tcp.c: 
964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:22.364 06:46:26 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:22.622 malloc0 00:20:22.622 06:46:27 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:22.880 06:46:27 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9Mng7s5y3P 00:20:23.138 [2024-04-17 06:46:27.671698] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:23.138 [2024-04-17 06:46:27.671741] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:23.138 [2024-04-17 06:46:27.671775] subsystem.c: 967:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:20:23.138 request: 00:20:23.138 { 00:20:23.138 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.138 "host": "nqn.2016-06.io.spdk:host1", 00:20:23.138 "psk": "/tmp/tmp.9Mng7s5y3P", 00:20:23.138 "method": "nvmf_subsystem_add_host", 00:20:23.138 "req_id": 1 00:20:23.138 } 00:20:23.138 Got JSON-RPC error response 00:20:23.138 response: 00:20:23.139 { 00:20:23.139 "code": -32603, 00:20:23.139 "message": "Internal error" 00:20:23.139 } 00:20:23.139 06:46:27 -- common/autotest_common.sh@641 -- # es=1 00:20:23.139 06:46:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:23.139 06:46:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:23.139 06:46:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:23.139 06:46:27 -- target/tls.sh@180 -- # killprocess 14512 00:20:23.139 06:46:27 -- common/autotest_common.sh@936 -- # '[' -z 14512 ']' 00:20:23.139 06:46:27 -- common/autotest_common.sh@940 -- # kill -0 14512 00:20:23.139 06:46:27 -- common/autotest_common.sh@941 -- # uname 00:20:23.139 06:46:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:23.139 06:46:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 14512 00:20:23.139 06:46:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:23.139 06:46:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:23.139 06:46:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 14512' 00:20:23.139 killing process with pid 14512 00:20:23.139 06:46:27 -- common/autotest_common.sh@955 -- # kill 14512 00:20:23.139 06:46:27 -- common/autotest_common.sh@960 -- # wait 14512 00:20:23.396 06:46:27 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.9Mng7s5y3P 00:20:23.396 06:46:27 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:23.396 06:46:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:23.396 06:46:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:23.396 06:46:27 -- common/autotest_common.sh@10 -- # set +x 00:20:23.396 06:46:27 -- nvmf/common.sh@470 -- # nvmfpid=14804 00:20:23.396 06:46:27 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:23.396 06:46:27 -- nvmf/common.sh@471 -- # waitforlisten 14804 00:20:23.396 06:46:27 -- common/autotest_common.sh@817 -- # '[' -z 14804 ']' 00:20:23.396 06:46:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.396 06:46:27 -- common/autotest_common.sh@822 -- # local 
max_retries=100 00:20:23.396 06:46:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:23.396 06:46:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:23.396 06:46:27 -- common/autotest_common.sh@10 -- # set +x 00:20:23.654 [2024-04-17 06:46:28.021880] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:20:23.654 [2024-04-17 06:46:28.021970] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:23.654 EAL: No free 2048 kB hugepages reported on node 1 00:20:23.654 [2024-04-17 06:46:28.085525] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.654 [2024-04-17 06:46:28.174223] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:23.654 [2024-04-17 06:46:28.174294] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:23.654 [2024-04-17 06:46:28.174321] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:23.654 [2024-04-17 06:46:28.174334] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:23.654 [2024-04-17 06:46:28.174346] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:23.654 [2024-04-17 06:46:28.174378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:23.912 06:46:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:23.912 06:46:28 -- common/autotest_common.sh@850 -- # return 0 00:20:23.912 06:46:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:23.912 06:46:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:23.912 06:46:28 -- common/autotest_common.sh@10 -- # set +x 00:20:23.912 06:46:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:23.912 06:46:28 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.9Mng7s5y3P 00:20:23.912 06:46:28 -- target/tls.sh@49 -- # local key=/tmp/tmp.9Mng7s5y3P 00:20:23.912 06:46:28 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:24.170 [2024-04-17 06:46:28.546237] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.170 06:46:28 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:24.429 06:46:28 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:24.686 [2024-04-17 06:46:29.055572] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:24.686 [2024-04-17 06:46:29.055817] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:24.686 06:46:29 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:24.944 malloc0 00:20:24.944 06:46:29 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:25.201 06:46:29 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9Mng7s5y3P 00:20:25.201 [2024-04-17 06:46:29.797114] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:25.460 06:46:29 -- target/tls.sh@188 -- # bdevperf_pid=15087 00:20:25.460 06:46:29 -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:25.460 06:46:29 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:25.460 06:46:29 -- target/tls.sh@191 -- # waitforlisten 15087 /var/tmp/bdevperf.sock 00:20:25.460 06:46:29 -- common/autotest_common.sh@817 -- # '[' -z 15087 ']' 00:20:25.460 06:46:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:25.460 06:46:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:25.460 06:46:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:25.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:25.460 06:46:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:25.460 06:46:29 -- common/autotest_common.sh@10 -- # set +x 00:20:25.460 [2024-04-17 06:46:29.853764] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:20:25.460 [2024-04-17 06:46:29.853850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid15087 ] 00:20:25.460 EAL: No free 2048 kB hugepages reported on node 1 00:20:25.460 [2024-04-17 06:46:29.911807] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.460 [2024-04-17 06:46:29.992729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:25.717 06:46:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:25.717 06:46:30 -- common/autotest_common.sh@850 -- # return 0 00:20:25.717 06:46:30 -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9Mng7s5y3P 00:20:25.974 [2024-04-17 06:46:30.327055] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:25.974 [2024-04-17 06:46:30.327167] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:25.974 TLSTESTn1 00:20:25.974 06:46:30 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:26.232 06:46:30 -- target/tls.sh@196 -- # tgtconf='{ 00:20:26.232 "subsystems": [ 00:20:26.232 { 00:20:26.232 "subsystem": "keyring", 00:20:26.232 "config": [] 00:20:26.232 }, 00:20:26.232 { 00:20:26.232 "subsystem": "iobuf", 00:20:26.232 "config": [ 00:20:26.232 { 00:20:26.232 "method": "iobuf_set_options", 00:20:26.232 "params": { 00:20:26.232 "small_pool_count": 8192, 00:20:26.232 "large_pool_count": 1024, 
00:20:26.232 "small_bufsize": 8192, 00:20:26.232 "large_bufsize": 135168 00:20:26.232 } 00:20:26.232 } 00:20:26.232 ] 00:20:26.232 }, 00:20:26.232 { 00:20:26.232 "subsystem": "sock", 00:20:26.232 "config": [ 00:20:26.232 { 00:20:26.232 "method": "sock_impl_set_options", 00:20:26.232 "params": { 00:20:26.232 "impl_name": "posix", 00:20:26.232 "recv_buf_size": 2097152, 00:20:26.232 "send_buf_size": 2097152, 00:20:26.232 "enable_recv_pipe": true, 00:20:26.232 "enable_quickack": false, 00:20:26.232 "enable_placement_id": 0, 00:20:26.232 "enable_zerocopy_send_server": true, 00:20:26.232 "enable_zerocopy_send_client": false, 00:20:26.232 "zerocopy_threshold": 0, 00:20:26.232 "tls_version": 0, 00:20:26.232 "enable_ktls": false 00:20:26.232 } 00:20:26.232 }, 00:20:26.232 { 00:20:26.232 "method": "sock_impl_set_options", 00:20:26.232 "params": { 00:20:26.232 "impl_name": "ssl", 00:20:26.232 "recv_buf_size": 4096, 00:20:26.232 "send_buf_size": 4096, 00:20:26.232 "enable_recv_pipe": true, 00:20:26.232 "enable_quickack": false, 00:20:26.232 "enable_placement_id": 0, 00:20:26.232 "enable_zerocopy_send_server": true, 00:20:26.232 "enable_zerocopy_send_client": false, 00:20:26.232 "zerocopy_threshold": 0, 00:20:26.232 "tls_version": 0, 00:20:26.232 "enable_ktls": false 00:20:26.232 } 00:20:26.232 } 00:20:26.232 ] 00:20:26.232 }, 00:20:26.232 { 00:20:26.232 "subsystem": "vmd", 00:20:26.232 "config": [] 00:20:26.232 }, 00:20:26.232 { 00:20:26.232 "subsystem": "accel", 00:20:26.232 "config": [ 00:20:26.232 { 00:20:26.232 "method": "accel_set_options", 00:20:26.232 "params": { 00:20:26.232 "small_cache_size": 128, 00:20:26.232 "large_cache_size": 16, 00:20:26.232 "task_count": 2048, 00:20:26.232 "sequence_count": 2048, 00:20:26.232 "buf_count": 2048 00:20:26.232 } 00:20:26.232 } 00:20:26.232 ] 00:20:26.232 }, 00:20:26.232 { 00:20:26.232 "subsystem": "bdev", 00:20:26.232 "config": [ 00:20:26.232 { 00:20:26.232 "method": "bdev_set_options", 00:20:26.232 "params": { 00:20:26.232 "bdev_io_pool_size": 65535, 00:20:26.232 "bdev_io_cache_size": 256, 00:20:26.232 "bdev_auto_examine": true, 00:20:26.232 "iobuf_small_cache_size": 128, 00:20:26.232 "iobuf_large_cache_size": 16 00:20:26.232 } 00:20:26.232 }, 00:20:26.232 { 00:20:26.232 "method": "bdev_raid_set_options", 00:20:26.232 "params": { 00:20:26.232 "process_window_size_kb": 1024 00:20:26.232 } 00:20:26.232 }, 00:20:26.232 { 00:20:26.232 "method": "bdev_iscsi_set_options", 00:20:26.232 "params": { 00:20:26.232 "timeout_sec": 30 00:20:26.232 } 00:20:26.232 }, 00:20:26.232 { 00:20:26.232 "method": "bdev_nvme_set_options", 00:20:26.232 "params": { 00:20:26.232 "action_on_timeout": "none", 00:20:26.232 "timeout_us": 0, 00:20:26.232 "timeout_admin_us": 0, 00:20:26.232 "keep_alive_timeout_ms": 10000, 00:20:26.232 "arbitration_burst": 0, 00:20:26.232 "low_priority_weight": 0, 00:20:26.232 "medium_priority_weight": 0, 00:20:26.232 "high_priority_weight": 0, 00:20:26.232 "nvme_adminq_poll_period_us": 10000, 00:20:26.232 "nvme_ioq_poll_period_us": 0, 00:20:26.232 "io_queue_requests": 0, 00:20:26.232 "delay_cmd_submit": true, 00:20:26.232 "transport_retry_count": 4, 00:20:26.232 "bdev_retry_count": 3, 00:20:26.232 "transport_ack_timeout": 0, 00:20:26.232 "ctrlr_loss_timeout_sec": 0, 00:20:26.232 "reconnect_delay_sec": 0, 00:20:26.232 "fast_io_fail_timeout_sec": 0, 00:20:26.232 "disable_auto_failback": false, 00:20:26.232 "generate_uuids": false, 00:20:26.232 "transport_tos": 0, 00:20:26.232 "nvme_error_stat": false, 00:20:26.232 "rdma_srq_size": 0, 00:20:26.232 
"io_path_stat": false, 00:20:26.232 "allow_accel_sequence": false, 00:20:26.232 "rdma_max_cq_size": 0, 00:20:26.232 "rdma_cm_event_timeout_ms": 0, 00:20:26.232 "dhchap_digests": [ 00:20:26.232 "sha256", 00:20:26.232 "sha384", 00:20:26.232 "sha512" 00:20:26.232 ], 00:20:26.232 "dhchap_dhgroups": [ 00:20:26.232 "null", 00:20:26.232 "ffdhe2048", 00:20:26.232 "ffdhe3072", 00:20:26.232 "ffdhe4096", 00:20:26.232 "ffdhe6144", 00:20:26.232 "ffdhe8192" 00:20:26.232 ] 00:20:26.232 } 00:20:26.232 }, 00:20:26.232 { 00:20:26.232 "method": "bdev_nvme_set_hotplug", 00:20:26.232 "params": { 00:20:26.232 "period_us": 100000, 00:20:26.232 "enable": false 00:20:26.232 } 00:20:26.232 }, 00:20:26.232 { 00:20:26.232 "method": "bdev_malloc_create", 00:20:26.232 "params": { 00:20:26.232 "name": "malloc0", 00:20:26.232 "num_blocks": 8192, 00:20:26.232 "block_size": 4096, 00:20:26.232 "physical_block_size": 4096, 00:20:26.232 "uuid": "6a808d1f-3921-41a1-9d06-56eea1462ab5", 00:20:26.232 "optimal_io_boundary": 0 00:20:26.232 } 00:20:26.232 }, 00:20:26.232 { 00:20:26.232 "method": "bdev_wait_for_examine" 00:20:26.232 } 00:20:26.232 ] 00:20:26.232 }, 00:20:26.232 { 00:20:26.232 "subsystem": "nbd", 00:20:26.232 "config": [] 00:20:26.232 }, 00:20:26.232 { 00:20:26.233 "subsystem": "scheduler", 00:20:26.233 "config": [ 00:20:26.233 { 00:20:26.233 "method": "framework_set_scheduler", 00:20:26.233 "params": { 00:20:26.233 "name": "static" 00:20:26.233 } 00:20:26.233 } 00:20:26.233 ] 00:20:26.233 }, 00:20:26.233 { 00:20:26.233 "subsystem": "nvmf", 00:20:26.233 "config": [ 00:20:26.233 { 00:20:26.233 "method": "nvmf_set_config", 00:20:26.233 "params": { 00:20:26.233 "discovery_filter": "match_any", 00:20:26.233 "admin_cmd_passthru": { 00:20:26.233 "identify_ctrlr": false 00:20:26.233 } 00:20:26.233 } 00:20:26.233 }, 00:20:26.233 { 00:20:26.233 "method": "nvmf_set_max_subsystems", 00:20:26.233 "params": { 00:20:26.233 "max_subsystems": 1024 00:20:26.233 } 00:20:26.233 }, 00:20:26.233 { 00:20:26.233 "method": "nvmf_set_crdt", 00:20:26.233 "params": { 00:20:26.233 "crdt1": 0, 00:20:26.233 "crdt2": 0, 00:20:26.233 "crdt3": 0 00:20:26.233 } 00:20:26.233 }, 00:20:26.233 { 00:20:26.233 "method": "nvmf_create_transport", 00:20:26.233 "params": { 00:20:26.233 "trtype": "TCP", 00:20:26.233 "max_queue_depth": 128, 00:20:26.233 "max_io_qpairs_per_ctrlr": 127, 00:20:26.233 "in_capsule_data_size": 4096, 00:20:26.233 "max_io_size": 131072, 00:20:26.233 "io_unit_size": 131072, 00:20:26.233 "max_aq_depth": 128, 00:20:26.233 "num_shared_buffers": 511, 00:20:26.233 "buf_cache_size": 4294967295, 00:20:26.233 "dif_insert_or_strip": false, 00:20:26.233 "zcopy": false, 00:20:26.233 "c2h_success": false, 00:20:26.233 "sock_priority": 0, 00:20:26.233 "abort_timeout_sec": 1, 00:20:26.233 "ack_timeout": 0 00:20:26.233 } 00:20:26.233 }, 00:20:26.233 { 00:20:26.233 "method": "nvmf_create_subsystem", 00:20:26.233 "params": { 00:20:26.233 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:26.233 "allow_any_host": false, 00:20:26.233 "serial_number": "SPDK00000000000001", 00:20:26.233 "model_number": "SPDK bdev Controller", 00:20:26.233 "max_namespaces": 10, 00:20:26.233 "min_cntlid": 1, 00:20:26.233 "max_cntlid": 65519, 00:20:26.233 "ana_reporting": false 00:20:26.233 } 00:20:26.233 }, 00:20:26.233 { 00:20:26.233 "method": "nvmf_subsystem_add_host", 00:20:26.233 "params": { 00:20:26.233 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:26.233 "host": "nqn.2016-06.io.spdk:host1", 00:20:26.233 "psk": "/tmp/tmp.9Mng7s5y3P" 00:20:26.233 } 00:20:26.233 }, 00:20:26.233 { 
00:20:26.233 "method": "nvmf_subsystem_add_ns", 00:20:26.233 "params": { 00:20:26.233 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:26.233 "namespace": { 00:20:26.233 "nsid": 1, 00:20:26.233 "bdev_name": "malloc0", 00:20:26.233 "nguid": "6A808D1F392141A19D0656EEA1462AB5", 00:20:26.233 "uuid": "6a808d1f-3921-41a1-9d06-56eea1462ab5", 00:20:26.233 "no_auto_visible": false 00:20:26.233 } 00:20:26.233 } 00:20:26.233 }, 00:20:26.233 { 00:20:26.233 "method": "nvmf_subsystem_add_listener", 00:20:26.233 "params": { 00:20:26.233 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:26.233 "listen_address": { 00:20:26.233 "trtype": "TCP", 00:20:26.233 "adrfam": "IPv4", 00:20:26.233 "traddr": "10.0.0.2", 00:20:26.233 "trsvcid": "4420" 00:20:26.233 }, 00:20:26.233 "secure_channel": true 00:20:26.233 } 00:20:26.233 } 00:20:26.233 ] 00:20:26.233 } 00:20:26.233 ] 00:20:26.233 }' 00:20:26.233 06:46:30 -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:26.491 06:46:31 -- target/tls.sh@197 -- # bdevperfconf='{ 00:20:26.491 "subsystems": [ 00:20:26.491 { 00:20:26.491 "subsystem": "keyring", 00:20:26.491 "config": [] 00:20:26.491 }, 00:20:26.491 { 00:20:26.491 "subsystem": "iobuf", 00:20:26.491 "config": [ 00:20:26.491 { 00:20:26.491 "method": "iobuf_set_options", 00:20:26.491 "params": { 00:20:26.491 "small_pool_count": 8192, 00:20:26.491 "large_pool_count": 1024, 00:20:26.491 "small_bufsize": 8192, 00:20:26.491 "large_bufsize": 135168 00:20:26.491 } 00:20:26.491 } 00:20:26.491 ] 00:20:26.491 }, 00:20:26.491 { 00:20:26.491 "subsystem": "sock", 00:20:26.491 "config": [ 00:20:26.491 { 00:20:26.491 "method": "sock_impl_set_options", 00:20:26.491 "params": { 00:20:26.491 "impl_name": "posix", 00:20:26.491 "recv_buf_size": 2097152, 00:20:26.491 "send_buf_size": 2097152, 00:20:26.491 "enable_recv_pipe": true, 00:20:26.491 "enable_quickack": false, 00:20:26.491 "enable_placement_id": 0, 00:20:26.491 "enable_zerocopy_send_server": true, 00:20:26.491 "enable_zerocopy_send_client": false, 00:20:26.491 "zerocopy_threshold": 0, 00:20:26.491 "tls_version": 0, 00:20:26.491 "enable_ktls": false 00:20:26.491 } 00:20:26.491 }, 00:20:26.491 { 00:20:26.491 "method": "sock_impl_set_options", 00:20:26.491 "params": { 00:20:26.491 "impl_name": "ssl", 00:20:26.491 "recv_buf_size": 4096, 00:20:26.491 "send_buf_size": 4096, 00:20:26.491 "enable_recv_pipe": true, 00:20:26.491 "enable_quickack": false, 00:20:26.491 "enable_placement_id": 0, 00:20:26.491 "enable_zerocopy_send_server": true, 00:20:26.491 "enable_zerocopy_send_client": false, 00:20:26.491 "zerocopy_threshold": 0, 00:20:26.491 "tls_version": 0, 00:20:26.491 "enable_ktls": false 00:20:26.491 } 00:20:26.491 } 00:20:26.491 ] 00:20:26.491 }, 00:20:26.491 { 00:20:26.491 "subsystem": "vmd", 00:20:26.491 "config": [] 00:20:26.491 }, 00:20:26.491 { 00:20:26.491 "subsystem": "accel", 00:20:26.491 "config": [ 00:20:26.491 { 00:20:26.491 "method": "accel_set_options", 00:20:26.491 "params": { 00:20:26.491 "small_cache_size": 128, 00:20:26.491 "large_cache_size": 16, 00:20:26.491 "task_count": 2048, 00:20:26.491 "sequence_count": 2048, 00:20:26.491 "buf_count": 2048 00:20:26.491 } 00:20:26.491 } 00:20:26.491 ] 00:20:26.491 }, 00:20:26.491 { 00:20:26.491 "subsystem": "bdev", 00:20:26.491 "config": [ 00:20:26.491 { 00:20:26.491 "method": "bdev_set_options", 00:20:26.491 "params": { 00:20:26.491 "bdev_io_pool_size": 65535, 00:20:26.491 "bdev_io_cache_size": 256, 00:20:26.491 "bdev_auto_examine": true, 00:20:26.491 
"iobuf_small_cache_size": 128, 00:20:26.491 "iobuf_large_cache_size": 16 00:20:26.491 } 00:20:26.491 }, 00:20:26.491 { 00:20:26.491 "method": "bdev_raid_set_options", 00:20:26.491 "params": { 00:20:26.491 "process_window_size_kb": 1024 00:20:26.491 } 00:20:26.491 }, 00:20:26.491 { 00:20:26.491 "method": "bdev_iscsi_set_options", 00:20:26.491 "params": { 00:20:26.491 "timeout_sec": 30 00:20:26.491 } 00:20:26.491 }, 00:20:26.491 { 00:20:26.491 "method": "bdev_nvme_set_options", 00:20:26.491 "params": { 00:20:26.491 "action_on_timeout": "none", 00:20:26.491 "timeout_us": 0, 00:20:26.491 "timeout_admin_us": 0, 00:20:26.491 "keep_alive_timeout_ms": 10000, 00:20:26.491 "arbitration_burst": 0, 00:20:26.491 "low_priority_weight": 0, 00:20:26.491 "medium_priority_weight": 0, 00:20:26.491 "high_priority_weight": 0, 00:20:26.491 "nvme_adminq_poll_period_us": 10000, 00:20:26.491 "nvme_ioq_poll_period_us": 0, 00:20:26.492 "io_queue_requests": 512, 00:20:26.492 "delay_cmd_submit": true, 00:20:26.492 "transport_retry_count": 4, 00:20:26.492 "bdev_retry_count": 3, 00:20:26.492 "transport_ack_timeout": 0, 00:20:26.492 "ctrlr_loss_timeout_sec": 0, 00:20:26.492 "reconnect_delay_sec": 0, 00:20:26.492 "fast_io_fail_timeout_sec": 0, 00:20:26.492 "disable_auto_failback": false, 00:20:26.492 "generate_uuids": false, 00:20:26.492 "transport_tos": 0, 00:20:26.492 "nvme_error_stat": false, 00:20:26.492 "rdma_srq_size": 0, 00:20:26.492 "io_path_stat": false, 00:20:26.492 "allow_accel_sequence": false, 00:20:26.492 "rdma_max_cq_size": 0, 00:20:26.492 "rdma_cm_event_timeout_ms": 0, 00:20:26.492 "dhchap_digests": [ 00:20:26.492 "sha256", 00:20:26.492 "sha384", 00:20:26.492 "sha512" 00:20:26.492 ], 00:20:26.492 "dhchap_dhgroups": [ 00:20:26.492 "null", 00:20:26.492 "ffdhe2048", 00:20:26.492 "ffdhe3072", 00:20:26.492 "ffdhe4096", 00:20:26.492 "ffdhe6144", 00:20:26.492 "ffdhe8192" 00:20:26.492 ] 00:20:26.492 } 00:20:26.492 }, 00:20:26.492 { 00:20:26.492 "method": "bdev_nvme_attach_controller", 00:20:26.492 "params": { 00:20:26.492 "name": "TLSTEST", 00:20:26.492 "trtype": "TCP", 00:20:26.492 "adrfam": "IPv4", 00:20:26.492 "traddr": "10.0.0.2", 00:20:26.492 "trsvcid": "4420", 00:20:26.492 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:26.492 "prchk_reftag": false, 00:20:26.492 "prchk_guard": false, 00:20:26.492 "ctrlr_loss_timeout_sec": 0, 00:20:26.492 "reconnect_delay_sec": 0, 00:20:26.492 "fast_io_fail_timeout_sec": 0, 00:20:26.492 "psk": "/tmp/tmp.9Mng7s5y3P", 00:20:26.492 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:26.492 "hdgst": false, 00:20:26.492 "ddgst": false 00:20:26.492 } 00:20:26.492 }, 00:20:26.492 { 00:20:26.492 "method": "bdev_nvme_set_hotplug", 00:20:26.492 "params": { 00:20:26.492 "period_us": 100000, 00:20:26.492 "enable": false 00:20:26.492 } 00:20:26.492 }, 00:20:26.492 { 00:20:26.492 "method": "bdev_wait_for_examine" 00:20:26.492 } 00:20:26.492 ] 00:20:26.492 }, 00:20:26.492 { 00:20:26.492 "subsystem": "nbd", 00:20:26.492 "config": [] 00:20:26.492 } 00:20:26.492 ] 00:20:26.492 }' 00:20:26.492 06:46:31 -- target/tls.sh@199 -- # killprocess 15087 00:20:26.492 06:46:31 -- common/autotest_common.sh@936 -- # '[' -z 15087 ']' 00:20:26.492 06:46:31 -- common/autotest_common.sh@940 -- # kill -0 15087 00:20:26.492 06:46:31 -- common/autotest_common.sh@941 -- # uname 00:20:26.492 06:46:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:26.492 06:46:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 15087 00:20:26.492 06:46:31 -- common/autotest_common.sh@942 -- # 
process_name=reactor_2 00:20:26.492 06:46:31 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:26.492 06:46:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 15087' 00:20:26.492 killing process with pid 15087 00:20:26.492 06:46:31 -- common/autotest_common.sh@955 -- # kill 15087 00:20:26.492 Received shutdown signal, test time was about 10.000000 seconds 00:20:26.492 00:20:26.492 Latency(us) 00:20:26.492 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.492 =================================================================================================================== 00:20:26.492 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:26.492 [2024-04-17 06:46:31.068992] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:26.492 06:46:31 -- common/autotest_common.sh@960 -- # wait 15087 00:20:26.750 06:46:31 -- target/tls.sh@200 -- # killprocess 14804 00:20:26.750 06:46:31 -- common/autotest_common.sh@936 -- # '[' -z 14804 ']' 00:20:26.750 06:46:31 -- common/autotest_common.sh@940 -- # kill -0 14804 00:20:26.750 06:46:31 -- common/autotest_common.sh@941 -- # uname 00:20:26.750 06:46:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:26.750 06:46:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 14804 00:20:26.750 06:46:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:26.750 06:46:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:26.750 06:46:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 14804' 00:20:26.750 killing process with pid 14804 00:20:26.750 06:46:31 -- common/autotest_common.sh@955 -- # kill 14804 00:20:26.750 [2024-04-17 06:46:31.294781] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:26.750 06:46:31 -- common/autotest_common.sh@960 -- # wait 14804 00:20:27.025 06:46:31 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:27.025 06:46:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:27.025 06:46:31 -- target/tls.sh@203 -- # echo '{ 00:20:27.025 "subsystems": [ 00:20:27.025 { 00:20:27.025 "subsystem": "keyring", 00:20:27.025 "config": [] 00:20:27.025 }, 00:20:27.025 { 00:20:27.025 "subsystem": "iobuf", 00:20:27.025 "config": [ 00:20:27.025 { 00:20:27.025 "method": "iobuf_set_options", 00:20:27.025 "params": { 00:20:27.025 "small_pool_count": 8192, 00:20:27.025 "large_pool_count": 1024, 00:20:27.025 "small_bufsize": 8192, 00:20:27.025 "large_bufsize": 135168 00:20:27.025 } 00:20:27.025 } 00:20:27.025 ] 00:20:27.025 }, 00:20:27.025 { 00:20:27.025 "subsystem": "sock", 00:20:27.025 "config": [ 00:20:27.025 { 00:20:27.025 "method": "sock_impl_set_options", 00:20:27.025 "params": { 00:20:27.025 "impl_name": "posix", 00:20:27.025 "recv_buf_size": 2097152, 00:20:27.025 "send_buf_size": 2097152, 00:20:27.025 "enable_recv_pipe": true, 00:20:27.025 "enable_quickack": false, 00:20:27.025 "enable_placement_id": 0, 00:20:27.025 "enable_zerocopy_send_server": true, 00:20:27.025 "enable_zerocopy_send_client": false, 00:20:27.025 "zerocopy_threshold": 0, 00:20:27.025 "tls_version": 0, 00:20:27.025 "enable_ktls": false 00:20:27.025 } 00:20:27.025 }, 00:20:27.025 { 00:20:27.025 "method": "sock_impl_set_options", 00:20:27.025 "params": { 00:20:27.025 "impl_name": "ssl", 00:20:27.025 "recv_buf_size": 4096, 
00:20:27.025 "send_buf_size": 4096, 00:20:27.025 "enable_recv_pipe": true, 00:20:27.025 "enable_quickack": false, 00:20:27.025 "enable_placement_id": 0, 00:20:27.025 "enable_zerocopy_send_server": true, 00:20:27.025 "enable_zerocopy_send_client": false, 00:20:27.025 "zerocopy_threshold": 0, 00:20:27.025 "tls_version": 0, 00:20:27.025 "enable_ktls": false 00:20:27.025 } 00:20:27.025 } 00:20:27.025 ] 00:20:27.025 }, 00:20:27.025 { 00:20:27.025 "subsystem": "vmd", 00:20:27.025 "config": [] 00:20:27.025 }, 00:20:27.025 { 00:20:27.025 "subsystem": "accel", 00:20:27.025 "config": [ 00:20:27.025 { 00:20:27.025 "method": "accel_set_options", 00:20:27.025 "params": { 00:20:27.025 "small_cache_size": 128, 00:20:27.025 "large_cache_size": 16, 00:20:27.025 "task_count": 2048, 00:20:27.025 "sequence_count": 2048, 00:20:27.025 "buf_count": 2048 00:20:27.025 } 00:20:27.025 } 00:20:27.025 ] 00:20:27.025 }, 00:20:27.025 { 00:20:27.025 "subsystem": "bdev", 00:20:27.025 "config": [ 00:20:27.025 { 00:20:27.025 "method": "bdev_set_options", 00:20:27.025 "params": { 00:20:27.025 "bdev_io_pool_size": 65535, 00:20:27.025 "bdev_io_cache_size": 256, 00:20:27.025 "bdev_auto_examine": true, 00:20:27.025 "iobuf_small_cache_size": 128, 00:20:27.025 "iobuf_large_cache_size": 16 00:20:27.025 } 00:20:27.025 }, 00:20:27.025 { 00:20:27.025 "method": "bdev_raid_set_options", 00:20:27.025 "params": { 00:20:27.025 "process_window_size_kb": 1024 00:20:27.025 } 00:20:27.025 }, 00:20:27.025 { 00:20:27.025 "method": "bdev_iscsi_set_options", 00:20:27.025 "params": { 00:20:27.025 "timeout_sec": 30 00:20:27.025 } 00:20:27.025 }, 00:20:27.025 { 00:20:27.025 "method": "bdev_nvme_set_options", 00:20:27.025 "params": { 00:20:27.025 "action_on_timeout": "none", 00:20:27.025 "timeout_us": 0, 00:20:27.025 "timeout_admin_us": 0, 00:20:27.025 "keep_alive_timeout_ms": 10000, 00:20:27.025 "arbitration_burst": 0, 00:20:27.025 "low_priority_weight": 0, 00:20:27.025 "medium_priority_weight": 0, 00:20:27.025 "high_priority_weight": 0, 00:20:27.025 "nvme_adminq_poll_period_us": 10000, 00:20:27.025 "nvme_ioq_poll_period_us": 0, 00:20:27.025 "io_queue_requests": 0, 00:20:27.025 "delay_cmd_submit": true, 00:20:27.025 "transport_retry_count": 4, 00:20:27.025 "bdev_retry_count": 3, 00:20:27.026 "transport_ack_timeout": 0, 00:20:27.026 "ctrlr_loss_timeout_sec": 0, 00:20:27.026 "reconnect_delay_sec": 0, 00:20:27.026 "fast_io_fail_timeout_sec": 0, 00:20:27.026 "disable_auto_failback": false, 00:20:27.026 "generate_uuids": false, 00:20:27.026 "transport_tos": 0, 00:20:27.026 "nvme_error_stat": false, 00:20:27.026 "rdma_srq_size": 0, 00:20:27.026 "io_path_stat": false, 00:20:27.026 "allow_accel_sequence": false, 00:20:27.026 "rdma_max_cq_size": 0, 00:20:27.026 "rdma_cm_event_timeout_ms": 0, 00:20:27.026 "dhchap_digests": [ 00:20:27.026 "sha256", 00:20:27.026 "sha384", 00:20:27.026 "sha512" 00:20:27.026 ], 00:20:27.026 "dhchap_dhgroups": [ 00:20:27.026 "null", 00:20:27.026 "ffdhe2048", 00:20:27.026 "ffdhe3072", 00:20:27.026 "ffdhe4096", 00:20:27.026 "ffdhe6144", 00:20:27.026 "ffdhe8192" 00:20:27.026 ] 00:20:27.026 } 00:20:27.026 }, 00:20:27.026 { 00:20:27.026 "method": "bdev_nvme_set_hotplug", 00:20:27.026 "params": { 00:20:27.026 06:46:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:27.026 "period_us": 100000, 00:20:27.026 "enable": false 00:20:27.026 } 00:20:27.026 }, 00:20:27.026 { 00:20:27.026 "method": "bdev_malloc_create", 00:20:27.026 "params": { 00:20:27.026 "name": "malloc0", 00:20:27.026 "num_blocks": 8192, 00:20:27.026 "block_size": 
4096, 00:20:27.026 "physical_block_size": 4096, 00:20:27.026 "uuid": "6a808d1f-3921-41a1-9d06-56eea1462ab5", 00:20:27.026 "optimal_io_boundary": 0 00:20:27.026 } 00:20:27.026 }, 00:20:27.026 { 00:20:27.026 "method": "bdev_wait_for_examine" 00:20:27.026 } 00:20:27.026 ] 00:20:27.026 }, 00:20:27.026 { 00:20:27.026 "subsystem": "nbd", 00:20:27.026 "config": [] 00:20:27.026 }, 00:20:27.026 { 00:20:27.026 "subsystem": "scheduler", 00:20:27.026 "config": [ 00:20:27.026 { 00:20:27.026 "method": "framework_set_scheduler", 00:20:27.026 "params": { 00:20:27.026 "name": "static" 00:20:27.026 } 00:20:27.026 } 00:20:27.026 ] 00:20:27.026 }, 00:20:27.026 { 00:20:27.026 "subsystem": "nvmf", 00:20:27.026 "config": [ 00:20:27.026 { 00:20:27.026 "method": "nvmf_set_config", 00:20:27.026 "params": { 00:20:27.026 "discovery_filter": "match_any", 00:20:27.026 "admin_cmd_passthru": { 00:20:27.026 "identify_ctrlr": false 00:20:27.026 } 00:20:27.026 } 00:20:27.026 }, 00:20:27.026 { 00:20:27.026 "method": "nvmf_set_max_subsystems", 00:20:27.026 "params": { 00:20:27.026 "max_subsystems": 1024 00:20:27.026 } 00:20:27.026 }, 00:20:27.026 { 00:20:27.026 "method": "nvmf_set_crdt", 00:20:27.026 "params": { 00:20:27.026 "crdt1": 0, 00:20:27.026 "crdt2": 0, 00:20:27.026 "crdt3": 0 00:20:27.026 } 00:20:27.026 }, 00:20:27.026 { 00:20:27.026 "method": "nvmf_create_transport", 00:20:27.026 "params": { 00:20:27.026 "trtype": "TCP", 00:20:27.026 "max_queue_depth": 128, 00:20:27.026 "max_io_qpairs_per_ctrlr": 127, 00:20:27.026 "in_capsule_data_size": 4096, 00:20:27.026 "max_io_size": 131072, 00:20:27.026 "io_unit_size": 131072, 00:20:27.026 "max_aq_depth": 128, 00:20:27.026 "num_shared_buffers": 511, 00:20:27.026 "buf_cache_size": 4294967295, 00:20:27.026 "dif_insert_or_strip": false, 00:20:27.026 "zcopy": false, 00:20:27.026 "c2h_success": false, 00:20:27.026 "sock_priority": 0, 00:20:27.026 "abort_timeout_sec": 1, 00:20:27.026 "ack_timeout": 0 00:20:27.026 } 00:20:27.026 }, 00:20:27.026 { 00:20:27.026 "method": "nvmf_create_subsystem", 00:20:27.026 "params": { 00:20:27.026 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.026 "allow_any_host": false, 00:20:27.026 "serial_number": "SPDK00000000000001", 00:20:27.026 "model_number": "SPDK bdev Controller", 00:20:27.026 "max_namespaces": 10, 00:20:27.026 "min_cntlid": 1, 00:20:27.026 "max_cntlid": 65519, 00:20:27.026 "ana_reporting": false 00:20:27.026 } 00:20:27.026 }, 00:20:27.026 { 00:20:27.026 "method": "nvmf_subsystem_add_host", 00:20:27.026 "params": { 00:20:27.026 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.026 "host": "nqn.2016-06.io.spdk:host1", 00:20:27.026 "psk": "/tmp/tmp.9Mng7s5y3P" 00:20:27.026 } 00:20:27.026 }, 00:20:27.026 { 00:20:27.026 "method": "nvmf_subsystem_add_ns", 00:20:27.026 "params": { 00:20:27.026 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.026 "namespace": { 00:20:27.026 "nsid": 1, 00:20:27.026 "bdev_name": "malloc0", 00:20:27.026 "nguid": "6A808D1F392141A19D0656EEA1462AB5", 00:20:27.026 "uuid": "6a808d1f-3921-41a1-9d06-56eea1462ab5", 00:20:27.026 "no_auto_visible": false 00:20:27.026 } 00:20:27.026 } 00:20:27.026 }, 00:20:27.026 { 00:20:27.026 "method": "nvmf_subsystem_add_listener", 00:20:27.026 "params": { 00:20:27.026 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.026 "listen_address": { 00:20:27.026 "trtype": "TCP", 00:20:27.027 "adrfam": "IPv4", 00:20:27.027 "traddr": "10.0.0.2", 00:20:27.027 "trsvcid": "4420" 00:20:27.027 }, 00:20:27.027 "secure_channel": true 00:20:27.027 } 00:20:27.027 } 00:20:27.027 ] 00:20:27.027 } 00:20:27.027 ] 00:20:27.027 
}' 00:20:27.027 06:46:31 -- common/autotest_common.sh@10 -- # set +x 00:20:27.027 06:46:31 -- nvmf/common.sh@470 -- # nvmfpid=15245 00:20:27.027 06:46:31 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:27.027 06:46:31 -- nvmf/common.sh@471 -- # waitforlisten 15245 00:20:27.027 06:46:31 -- common/autotest_common.sh@817 -- # '[' -z 15245 ']' 00:20:27.027 06:46:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.027 06:46:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:27.027 06:46:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:27.027 06:46:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:27.027 06:46:31 -- common/autotest_common.sh@10 -- # set +x 00:20:27.027 [2024-04-17 06:46:31.598978] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:20:27.027 [2024-04-17 06:46:31.599077] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:27.323 EAL: No free 2048 kB hugepages reported on node 1 00:20:27.324 [2024-04-17 06:46:31.668263] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.324 [2024-04-17 06:46:31.757441] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:27.324 [2024-04-17 06:46:31.757508] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:27.324 [2024-04-17 06:46:31.757533] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:27.324 [2024-04-17 06:46:31.757546] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:27.324 [2024-04-17 06:46:31.757559] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:27.324 [2024-04-17 06:46:31.757661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:27.581 [2024-04-17 06:46:31.985664] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:27.581 [2024-04-17 06:46:32.001621] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:27.581 [2024-04-17 06:46:32.017679] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:27.581 [2024-04-17 06:46:32.025383] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:28.147 06:46:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:28.147 06:46:32 -- common/autotest_common.sh@850 -- # return 0 00:20:28.147 06:46:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:28.147 06:46:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:28.147 06:46:32 -- common/autotest_common.sh@10 -- # set +x 00:20:28.147 06:46:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:28.147 06:46:32 -- target/tls.sh@207 -- # bdevperf_pid=15410 00:20:28.147 06:46:32 -- target/tls.sh@208 -- # waitforlisten 15410 /var/tmp/bdevperf.sock 00:20:28.147 06:46:32 -- common/autotest_common.sh@817 -- # '[' -z 15410 ']' 00:20:28.147 06:46:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:28.147 06:46:32 -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:28.147 06:46:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:28.147 06:46:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:28.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
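The target-side setup traced just above (target/tls.sh@185 calling setup_nvmf_tgt) reduces to six rpc.py calls against the freshly started nvmf_tgt. A minimal sketch of that flow, using the same rpc.py path, NQNs and addresses that appear in this log and assuming the PSK has already been written to /tmp/tmp.9Mng7s5y3P; the chmod matters, because the earlier nvmf_subsystem_add_host attempt in this log failed with "Incorrect permissions for PSK file":

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  PSK=/tmp/tmp.9Mng7s5y3P

  chmod 0600 "$PSK"                       # tcp_load_psk rejects looser permissions
  $RPC nvmf_create_transport -t tcp -o    # TCP transport, flags as used by tls.sh
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener (logged as experimental)
  $RPC bdev_malloc_create 32 4096 -b malloc0   # 32 MiB malloc bdev, 4096-byte blocks (num_blocks 8192 in the saved config)
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$PSK"   # emits the v24.09 PSK-path deprecation warning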
00:20:28.147 06:46:32 -- target/tls.sh@204 -- # echo '{ 00:20:28.147 "subsystems": [ 00:20:28.147 { 00:20:28.147 "subsystem": "keyring", 00:20:28.147 "config": [] 00:20:28.147 }, 00:20:28.147 { 00:20:28.147 "subsystem": "iobuf", 00:20:28.147 "config": [ 00:20:28.147 { 00:20:28.147 "method": "iobuf_set_options", 00:20:28.147 "params": { 00:20:28.147 "small_pool_count": 8192, 00:20:28.147 "large_pool_count": 1024, 00:20:28.147 "small_bufsize": 8192, 00:20:28.147 "large_bufsize": 135168 00:20:28.147 } 00:20:28.147 } 00:20:28.147 ] 00:20:28.147 }, 00:20:28.147 { 00:20:28.147 "subsystem": "sock", 00:20:28.147 "config": [ 00:20:28.147 { 00:20:28.147 "method": "sock_impl_set_options", 00:20:28.147 "params": { 00:20:28.147 "impl_name": "posix", 00:20:28.147 "recv_buf_size": 2097152, 00:20:28.147 "send_buf_size": 2097152, 00:20:28.147 "enable_recv_pipe": true, 00:20:28.147 "enable_quickack": false, 00:20:28.147 "enable_placement_id": 0, 00:20:28.147 "enable_zerocopy_send_server": true, 00:20:28.147 "enable_zerocopy_send_client": false, 00:20:28.147 "zerocopy_threshold": 0, 00:20:28.147 "tls_version": 0, 00:20:28.147 "enable_ktls": false 00:20:28.147 } 00:20:28.147 }, 00:20:28.147 { 00:20:28.147 "method": "sock_impl_set_options", 00:20:28.147 "params": { 00:20:28.147 "impl_name": "ssl", 00:20:28.147 "recv_buf_size": 4096, 00:20:28.147 "send_buf_size": 4096, 00:20:28.147 "enable_recv_pipe": true, 00:20:28.147 "enable_quickack": false, 00:20:28.147 "enable_placement_id": 0, 00:20:28.147 "enable_zerocopy_send_server": true, 00:20:28.147 "enable_zerocopy_send_client": false, 00:20:28.147 "zerocopy_threshold": 0, 00:20:28.147 "tls_version": 0, 00:20:28.147 "enable_ktls": false 00:20:28.147 } 00:20:28.147 } 00:20:28.147 ] 00:20:28.147 }, 00:20:28.147 { 00:20:28.147 "subsystem": "vmd", 00:20:28.147 "config": [] 00:20:28.147 }, 00:20:28.147 { 00:20:28.147 "subsystem": "accel", 00:20:28.147 "config": [ 00:20:28.147 { 00:20:28.147 "method": "accel_set_options", 00:20:28.147 "params": { 00:20:28.147 "small_cache_size": 128, 00:20:28.147 "large_cache_size": 16, 00:20:28.147 "task_count": 2048, 00:20:28.147 "sequence_count": 2048, 00:20:28.147 "buf_count": 2048 00:20:28.147 } 00:20:28.147 } 00:20:28.147 ] 00:20:28.147 }, 00:20:28.147 { 00:20:28.147 "subsystem": "bdev", 00:20:28.147 "config": [ 00:20:28.147 { 00:20:28.147 "method": "bdev_set_options", 00:20:28.147 "params": { 00:20:28.147 "bdev_io_pool_size": 65535, 00:20:28.147 "bdev_io_cache_size": 256, 00:20:28.147 "bdev_auto_examine": true, 00:20:28.147 "iobuf_small_cache_size": 128, 00:20:28.147 "iobuf_large_cache_size": 16 00:20:28.147 } 00:20:28.147 }, 00:20:28.147 { 00:20:28.147 "method": "bdev_raid_set_options", 00:20:28.147 "params": { 00:20:28.147 "process_window_size_kb": 1024 00:20:28.147 } 00:20:28.147 }, 00:20:28.147 { 00:20:28.147 "method": "bdev_iscsi_set_options", 00:20:28.147 "params": { 00:20:28.147 "timeout_sec": 30 00:20:28.147 } 00:20:28.147 }, 00:20:28.147 { 00:20:28.147 "method": "bdev_nvme_set_options", 00:20:28.147 "params": { 00:20:28.147 "action_on_timeout": "none", 00:20:28.147 "timeout_us": 0, 00:20:28.147 "timeout_admin_us": 0, 00:20:28.147 "keep_alive_timeout_ms": 10000, 00:20:28.147 "arbitration_burst": 0, 00:20:28.147 "low_priority_weight": 0, 00:20:28.147 "medium_priority_weight": 0, 00:20:28.147 "high_priority_weight": 0, 00:20:28.147 "nvme_adminq_poll_period_us": 10000, 00:20:28.147 "nvme_ioq_poll_period_us": 0, 00:20:28.147 "io_queue_requests": 512, 00:20:28.147 "delay_cmd_submit": true, 00:20:28.147 "transport_retry_count": 
4, 00:20:28.147 "bdev_retry_count": 3, 00:20:28.147 "transport_ack_timeout": 0, 00:20:28.147 "ctrlr_loss_timeout_sec": 0, 00:20:28.147 "reconnect_delay_sec": 0, 00:20:28.147 "fast_io_fail_timeout_sec": 0, 00:20:28.147 "disable_auto_failback": false, 00:20:28.147 "generate_uuids": false, 00:20:28.147 "transport_tos": 0, 00:20:28.147 "nvme_error_stat": false, 00:20:28.147 "rdma_srq_size": 0, 00:20:28.147 "io_path_stat": false, 00:20:28.147 "allow_accel_sequence": false, 00:20:28.147 "rdma_max_cq_size": 0, 00:20:28.147 "rdma_cm_event_timeout_ms": 0, 00:20:28.147 "dhchap_digests": [ 00:20:28.147 "sha256", 00:20:28.147 "sha384", 00:20:28.147 "sha512" 00:20:28.147 ], 00:20:28.147 "dhchap_dhgroups": [ 00:20:28.147 "null", 00:20:28.147 "ffdhe2048", 00:20:28.147 "ffdhe3072", 00:20:28.147 "ffdhe4096", 00:20:28.147 "ffdhe6144", 00:20:28.147 "ffdhe8192" 00:20:28.147 ] 00:20:28.147 } 00:20:28.147 }, 00:20:28.147 { 00:20:28.147 "method": "bdev_nvme_attach_controller", 00:20:28.147 "params": { 00:20:28.147 "name": "TLSTEST", 00:20:28.147 "trtype": "TCP", 00:20:28.147 "adrfam": "IPv4", 00:20:28.147 "traddr": "10.0.0.2", 00:20:28.147 "trsvcid": "4420", 00:20:28.147 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.147 "prchk_reftag": false, 00:20:28.147 "prchk_guard": false, 00:20:28.147 "ctrlr_loss_timeout_sec": 0, 00:20:28.147 "reconnect_delay_sec": 0, 00:20:28.147 "fast_io_fail_timeout_sec": 0, 00:20:28.148 "psk": "/tmp/tmp.9Mng7s5y3P", 00:20:28.148 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:28.148 "hdgst": false, 00:20:28.148 "ddgst": false 00:20:28.148 } 00:20:28.148 }, 00:20:28.148 { 00:20:28.148 "method": "bdev_nvme_set_hotplug", 00:20:28.148 "params": { 00:20:28.148 "period_us": 100000, 00:20:28.148 "enable": false 00:20:28.148 } 00:20:28.148 }, 00:20:28.148 { 00:20:28.148 "method": "bdev_wait_for_examine" 00:20:28.148 } 00:20:28.148 ] 00:20:28.148 }, 00:20:28.148 { 00:20:28.148 "subsystem": "nbd", 00:20:28.148 "config": [] 00:20:28.148 } 00:20:28.148 ] 00:20:28.148 }' 00:20:28.148 06:46:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:28.148 06:46:32 -- common/autotest_common.sh@10 -- # set +x 00:20:28.148 [2024-04-17 06:46:32.593284] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:20:28.148 [2024-04-17 06:46:32.593376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid15410 ] 00:20:28.148 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.148 [2024-04-17 06:46:32.652718] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.148 [2024-04-17 06:46:32.736596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:28.406 [2024-04-17 06:46:32.895737] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:28.406 [2024-04-17 06:46:32.895849] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:28.972 06:46:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:28.972 06:46:33 -- common/autotest_common.sh@850 -- # return 0 00:20:28.972 06:46:33 -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:29.229 Running I/O for 10 seconds... 
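bdevperf is started here in RPC-server mode (-z) with its whole bdev stack, including the TLS-enabled bdev_nvme_attach_controller call, supplied as the JSON blob echoed above on /dev/fd/63, and the actual I/O pass is then triggered out of band with bdevperf.py. A sketch of the same driver pattern, assuming the configuration is written to an ordinary file (the name /tmp/bdevperf.json is illustrative, not from this log):

  BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  DRIVER=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py

  # -z: start idle and wait for a perform_tests RPC; -q/-o/-w/-t are the queue depth,
  # I/O size, workload and duration used for the 10-second run in this log
  $BDEVPERF -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /tmp/bdevperf.json &

  # kick off the run once the RPC socket is up; the driver timeout (20 s) exceeds the test time (10 s)
  $DRIVER -t 20 -s /var/tmp/bdevperf.sock perform_tests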
00:20:39.195 00:20:39.195 Latency(us) 00:20:39.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.195 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:39.195 Verification LBA range: start 0x0 length 0x2000 00:20:39.195 TLSTESTn1 : 10.09 1359.27 5.31 0.00 0.00 93758.11 6553.60 70293.43 00:20:39.195 =================================================================================================================== 00:20:39.195 Total : 1359.27 5.31 0.00 0.00 93758.11 6553.60 70293.43 00:20:39.195 0 00:20:39.195 06:46:43 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:39.195 06:46:43 -- target/tls.sh@214 -- # killprocess 15410 00:20:39.195 06:46:43 -- common/autotest_common.sh@936 -- # '[' -z 15410 ']' 00:20:39.195 06:46:43 -- common/autotest_common.sh@940 -- # kill -0 15410 00:20:39.195 06:46:43 -- common/autotest_common.sh@941 -- # uname 00:20:39.195 06:46:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:39.195 06:46:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 15410 00:20:39.453 06:46:43 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:39.453 06:46:43 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:39.453 06:46:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 15410' 00:20:39.453 killing process with pid 15410 00:20:39.453 06:46:43 -- common/autotest_common.sh@955 -- # kill 15410 00:20:39.453 Received shutdown signal, test time was about 10.000000 seconds 00:20:39.453 00:20:39.453 Latency(us) 00:20:39.453 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.453 =================================================================================================================== 00:20:39.453 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:39.453 [2024-04-17 06:46:43.815582] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:39.453 06:46:43 -- common/autotest_common.sh@960 -- # wait 15410 00:20:39.453 06:46:44 -- target/tls.sh@215 -- # killprocess 15245 00:20:39.453 06:46:44 -- common/autotest_common.sh@936 -- # '[' -z 15245 ']' 00:20:39.453 06:46:44 -- common/autotest_common.sh@940 -- # kill -0 15245 00:20:39.453 06:46:44 -- common/autotest_common.sh@941 -- # uname 00:20:39.453 06:46:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:39.453 06:46:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 15245 00:20:39.712 06:46:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:39.713 06:46:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:39.713 06:46:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 15245' 00:20:39.713 killing process with pid 15245 00:20:39.713 06:46:44 -- common/autotest_common.sh@955 -- # kill 15245 00:20:39.713 [2024-04-17 06:46:44.068304] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:39.713 06:46:44 -- common/autotest_common.sh@960 -- # wait 15245 00:20:39.989 06:46:44 -- target/tls.sh@218 -- # nvmfappstart 00:20:39.989 06:46:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:39.989 06:46:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:39.989 06:46:44 -- common/autotest_common.sh@10 -- # set +x 00:20:39.989 06:46:44 -- nvmf/common.sh@470 -- # 
nvmfpid=16741 00:20:39.989 06:46:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:39.989 06:46:44 -- nvmf/common.sh@471 -- # waitforlisten 16741 00:20:39.989 06:46:44 -- common/autotest_common.sh@817 -- # '[' -z 16741 ']' 00:20:39.989 06:46:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.989 06:46:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:39.989 06:46:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.989 06:46:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:39.989 06:46:44 -- common/autotest_common.sh@10 -- # set +x 00:20:39.989 [2024-04-17 06:46:44.369372] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:20:39.989 [2024-04-17 06:46:44.369463] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.989 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.989 [2024-04-17 06:46:44.434855] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.989 [2024-04-17 06:46:44.521392] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.989 [2024-04-17 06:46:44.521474] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.989 [2024-04-17 06:46:44.521487] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:39.989 [2024-04-17 06:46:44.521499] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:39.989 [2024-04-17 06:46:44.521517] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
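The TLSTESTn1 table a few lines up reports 1359.27 IOPS and 5.31 MiB/s for the 10-second verify run over the TLS connection. The two columns are consistent for 4096-byte I/O; a quick arithmetic check:

  # MiB/s = IOPS * io_size_bytes / 2^20
  awk 'BEGIN { printf "%.2f MiB/s\n", 1359.27 * 4096 / 1048576 }'    # prints 5.31 MiB/s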
00:20:39.989 [2024-04-17 06:46:44.521543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.253 06:46:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:40.253 06:46:44 -- common/autotest_common.sh@850 -- # return 0 00:20:40.253 06:46:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:40.253 06:46:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:40.253 06:46:44 -- common/autotest_common.sh@10 -- # set +x 00:20:40.253 06:46:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.253 06:46:44 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.9Mng7s5y3P 00:20:40.253 06:46:44 -- target/tls.sh@49 -- # local key=/tmp/tmp.9Mng7s5y3P 00:20:40.253 06:46:44 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:40.510 [2024-04-17 06:46:44.875809] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.510 06:46:44 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:40.768 06:46:45 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:40.768 [2024-04-17 06:46:45.357111] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:40.768 [2024-04-17 06:46:45.357407] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:40.768 06:46:45 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:41.025 malloc0 00:20:41.283 06:46:45 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:41.283 06:46:45 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9Mng7s5y3P 00:20:41.540 [2024-04-17 06:46:46.115247] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:41.540 06:46:46 -- target/tls.sh@222 -- # bdevperf_pid=17021 00:20:41.540 06:46:46 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:41.540 06:46:46 -- target/tls.sh@225 -- # waitforlisten 17021 /var/tmp/bdevperf.sock 00:20:41.540 06:46:46 -- common/autotest_common.sh@817 -- # '[' -z 17021 ']' 00:20:41.540 06:46:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:41.540 06:46:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:41.540 06:46:46 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:41.540 06:46:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:41.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
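Both nvmf_tgt and bdevperf are launched asynchronously, so the harness repeatedly prints "Waiting for process to start up and listen on UNIX domain socket ..." and only issues rpc.py calls once the socket is there (the waitforlisten helper from common/autotest_common.sh). A simplified stand-in for that wait, assuming nothing beyond the application creating its UNIX-domain RPC socket at the given path (the function name is illustrative):

  wait_for_rpc_sock() {
      local sock=$1 retries=${2:-100}
      # poll until the RPC socket appears, then let callers start issuing RPCs
      while (( retries-- > 0 )); do
          [ -S "$sock" ] && return 0
          sleep 0.1
      done
      echo "timed out waiting for $sock" >&2
      return 1
  }

  wait_for_rpc_sock /var/tmp/bdevperf.sock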
00:20:41.540 06:46:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:41.540 06:46:46 -- common/autotest_common.sh@10 -- # set +x 00:20:41.799 [2024-04-17 06:46:46.174166] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:20:41.799 [2024-04-17 06:46:46.174263] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid17021 ] 00:20:41.799 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.799 [2024-04-17 06:46:46.233376] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.799 [2024-04-17 06:46:46.316348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.056 06:46:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:42.056 06:46:46 -- common/autotest_common.sh@850 -- # return 0 00:20:42.056 06:46:46 -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9Mng7s5y3P 00:20:42.313 06:46:46 -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:42.571 [2024-04-17 06:46:46.924299] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:42.571 nvme0n1 00:20:42.571 06:46:47 -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:42.571 Running I/O for 1 seconds... 
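From target/tls.sh@227 on, the initiator side stops handing bdev_nvme_attach_controller a raw PSK file path, which the earlier TLSTEST runs show triggering the "spdk_nvme_ctrlr_opts.psk ... to be removed in v24.09" deprecation warning, and instead registers the file with the keyring and refers to it by name. The two forms, exactly as exercised against /var/tmp/bdevperf.sock in this log (each bdevperf instance uses only one of them):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock

  # deprecated form (earlier runs): PSK passed as a file path
  $RPC -s $SOCK bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9Mng7s5y3P

  # keyring form (this run): register the key once, then attach by key name
  $RPC -s $SOCK keyring_file_add_key key0 /tmp/tmp.9Mng7s5y3P
  $RPC -s $SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1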
00:20:43.944 00:20:43.944 Latency(us) 00:20:43.944 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.944 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:43.944 Verification LBA range: start 0x0 length 0x2000 00:20:43.944 nvme0n1 : 1.04 2439.55 9.53 0.00 0.00 51527.03 6893.42 79225.74 00:20:43.944 =================================================================================================================== 00:20:43.944 Total : 2439.55 9.53 0.00 0.00 51527.03 6893.42 79225.74 00:20:43.944 0 00:20:43.944 06:46:48 -- target/tls.sh@234 -- # killprocess 17021 00:20:43.944 06:46:48 -- common/autotest_common.sh@936 -- # '[' -z 17021 ']' 00:20:43.944 06:46:48 -- common/autotest_common.sh@940 -- # kill -0 17021 00:20:43.944 06:46:48 -- common/autotest_common.sh@941 -- # uname 00:20:43.944 06:46:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:43.944 06:46:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 17021 00:20:43.944 06:46:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:43.944 06:46:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:43.944 06:46:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 17021' 00:20:43.944 killing process with pid 17021 00:20:43.944 06:46:48 -- common/autotest_common.sh@955 -- # kill 17021 00:20:43.944 Received shutdown signal, test time was about 1.000000 seconds 00:20:43.944 00:20:43.944 Latency(us) 00:20:43.944 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.944 =================================================================================================================== 00:20:43.944 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:43.944 06:46:48 -- common/autotest_common.sh@960 -- # wait 17021 00:20:43.944 06:46:48 -- target/tls.sh@235 -- # killprocess 16741 00:20:43.944 06:46:48 -- common/autotest_common.sh@936 -- # '[' -z 16741 ']' 00:20:43.944 06:46:48 -- common/autotest_common.sh@940 -- # kill -0 16741 00:20:43.944 06:46:48 -- common/autotest_common.sh@941 -- # uname 00:20:43.945 06:46:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:43.945 06:46:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 16741 00:20:43.945 06:46:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:43.945 06:46:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:43.945 06:46:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 16741' 00:20:43.945 killing process with pid 16741 00:20:43.945 06:46:48 -- common/autotest_common.sh@955 -- # kill 16741 00:20:43.945 [2024-04-17 06:46:48.456317] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:43.945 06:46:48 -- common/autotest_common.sh@960 -- # wait 16741 00:20:44.203 06:46:48 -- target/tls.sh@238 -- # nvmfappstart 00:20:44.203 06:46:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:44.203 06:46:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:44.203 06:46:48 -- common/autotest_common.sh@10 -- # set +x 00:20:44.203 06:46:48 -- nvmf/common.sh@470 -- # nvmfpid=17303 00:20:44.203 06:46:48 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:44.203 06:46:48 -- nvmf/common.sh@471 -- # waitforlisten 17303 00:20:44.203 06:46:48 -- 
common/autotest_common.sh@817 -- # '[' -z 17303 ']' 00:20:44.203 06:46:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.203 06:46:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:44.203 06:46:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.203 06:46:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:44.203 06:46:48 -- common/autotest_common.sh@10 -- # set +x 00:20:44.203 [2024-04-17 06:46:48.755394] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:20:44.203 [2024-04-17 06:46:48.755482] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.203 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.461 [2024-04-17 06:46:48.825145] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.461 [2024-04-17 06:46:48.914221] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:44.461 [2024-04-17 06:46:48.914269] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:44.461 [2024-04-17 06:46:48.914291] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.461 [2024-04-17 06:46:48.914305] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.461 [2024-04-17 06:46:48.914316] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:44.461 [2024-04-17 06:46:48.914350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.461 06:46:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:44.461 06:46:49 -- common/autotest_common.sh@850 -- # return 0 00:20:44.461 06:46:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:44.461 06:46:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:44.461 06:46:49 -- common/autotest_common.sh@10 -- # set +x 00:20:44.461 06:46:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.461 06:46:49 -- target/tls.sh@239 -- # rpc_cmd 00:20:44.461 06:46:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.461 06:46:49 -- common/autotest_common.sh@10 -- # set +x 00:20:44.461 [2024-04-17 06:46:49.061426] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:44.720 malloc0 00:20:44.720 [2024-04-17 06:46:49.094101] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:44.720 [2024-04-17 06:46:49.094388] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:44.720 06:46:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.720 06:46:49 -- target/tls.sh@252 -- # bdevperf_pid=17442 00:20:44.720 06:46:49 -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:44.720 06:46:49 -- target/tls.sh@254 -- # waitforlisten 17442 /var/tmp/bdevperf.sock 00:20:44.720 06:46:49 -- common/autotest_common.sh@817 -- # '[' -z 17442 ']' 00:20:44.720 06:46:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:44.720 06:46:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:44.720 06:46:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:44.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:44.720 06:46:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:44.720 06:46:49 -- common/autotest_common.sh@10 -- # set +x 00:20:44.720 [2024-04-17 06:46:49.162081] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:20:44.720 [2024-04-17 06:46:49.162149] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid17442 ] 00:20:44.720 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.720 [2024-04-17 06:46:49.224392] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.720 [2024-04-17 06:46:49.312465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.978 06:46:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:44.978 06:46:49 -- common/autotest_common.sh@850 -- # return 0 00:20:44.978 06:46:49 -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9Mng7s5y3P 00:20:45.236 06:46:49 -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:45.494 [2024-04-17 06:46:49.889270] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:45.494 nvme0n1 00:20:45.494 06:46:49 -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:45.494 Running I/O for 1 seconds... 00:20:46.866 00:20:46.866 Latency(us) 00:20:46.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.866 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:46.866 Verification LBA range: start 0x0 length 0x2000 00:20:46.866 nvme0n1 : 1.05 2524.47 9.86 0.00 0.00 49695.03 7281.78 72235.24 00:20:46.866 =================================================================================================================== 00:20:46.866 Total : 2524.47 9.86 0.00 0.00 49695.03 7281.78 72235.24 00:20:46.866 0 00:20:46.866 06:46:51 -- target/tls.sh@263 -- # rpc_cmd save_config 00:20:46.866 06:46:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.866 06:46:51 -- common/autotest_common.sh@10 -- # set +x 00:20:46.866 06:46:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.866 06:46:51 -- target/tls.sh@263 -- # tgtcfg='{ 00:20:46.866 "subsystems": [ 00:20:46.866 { 00:20:46.866 "subsystem": "keyring", 00:20:46.866 "config": [ 00:20:46.866 { 00:20:46.866 "method": "keyring_file_add_key", 00:20:46.866 "params": { 00:20:46.866 "name": "key0", 00:20:46.866 "path": "/tmp/tmp.9Mng7s5y3P" 00:20:46.866 } 00:20:46.866 } 00:20:46.866 ] 00:20:46.866 }, 00:20:46.866 { 00:20:46.866 "subsystem": "iobuf", 00:20:46.866 "config": [ 00:20:46.866 { 00:20:46.866 "method": "iobuf_set_options", 00:20:46.866 "params": { 00:20:46.866 "small_pool_count": 8192, 00:20:46.866 "large_pool_count": 1024, 00:20:46.866 "small_bufsize": 8192, 00:20:46.866 "large_bufsize": 135168 00:20:46.866 } 00:20:46.866 } 00:20:46.866 ] 00:20:46.866 }, 00:20:46.866 { 00:20:46.866 "subsystem": "sock", 00:20:46.866 "config": [ 00:20:46.867 { 00:20:46.867 "method": "sock_impl_set_options", 00:20:46.867 "params": { 00:20:46.867 "impl_name": "posix", 00:20:46.867 "recv_buf_size": 2097152, 00:20:46.867 "send_buf_size": 2097152, 00:20:46.867 "enable_recv_pipe": true, 00:20:46.867 "enable_quickack": false, 00:20:46.867 "enable_placement_id": 0, 00:20:46.867 
"enable_zerocopy_send_server": true, 00:20:46.867 "enable_zerocopy_send_client": false, 00:20:46.867 "zerocopy_threshold": 0, 00:20:46.867 "tls_version": 0, 00:20:46.867 "enable_ktls": false 00:20:46.867 } 00:20:46.867 }, 00:20:46.867 { 00:20:46.867 "method": "sock_impl_set_options", 00:20:46.867 "params": { 00:20:46.867 "impl_name": "ssl", 00:20:46.867 "recv_buf_size": 4096, 00:20:46.867 "send_buf_size": 4096, 00:20:46.867 "enable_recv_pipe": true, 00:20:46.867 "enable_quickack": false, 00:20:46.867 "enable_placement_id": 0, 00:20:46.867 "enable_zerocopy_send_server": true, 00:20:46.867 "enable_zerocopy_send_client": false, 00:20:46.867 "zerocopy_threshold": 0, 00:20:46.867 "tls_version": 0, 00:20:46.867 "enable_ktls": false 00:20:46.867 } 00:20:46.867 } 00:20:46.867 ] 00:20:46.867 }, 00:20:46.867 { 00:20:46.867 "subsystem": "vmd", 00:20:46.867 "config": [] 00:20:46.867 }, 00:20:46.867 { 00:20:46.867 "subsystem": "accel", 00:20:46.867 "config": [ 00:20:46.867 { 00:20:46.867 "method": "accel_set_options", 00:20:46.867 "params": { 00:20:46.867 "small_cache_size": 128, 00:20:46.867 "large_cache_size": 16, 00:20:46.867 "task_count": 2048, 00:20:46.867 "sequence_count": 2048, 00:20:46.867 "buf_count": 2048 00:20:46.867 } 00:20:46.867 } 00:20:46.867 ] 00:20:46.867 }, 00:20:46.867 { 00:20:46.867 "subsystem": "bdev", 00:20:46.867 "config": [ 00:20:46.867 { 00:20:46.867 "method": "bdev_set_options", 00:20:46.867 "params": { 00:20:46.867 "bdev_io_pool_size": 65535, 00:20:46.867 "bdev_io_cache_size": 256, 00:20:46.867 "bdev_auto_examine": true, 00:20:46.867 "iobuf_small_cache_size": 128, 00:20:46.867 "iobuf_large_cache_size": 16 00:20:46.867 } 00:20:46.867 }, 00:20:46.867 { 00:20:46.867 "method": "bdev_raid_set_options", 00:20:46.867 "params": { 00:20:46.867 "process_window_size_kb": 1024 00:20:46.867 } 00:20:46.867 }, 00:20:46.867 { 00:20:46.867 "method": "bdev_iscsi_set_options", 00:20:46.867 "params": { 00:20:46.867 "timeout_sec": 30 00:20:46.867 } 00:20:46.867 }, 00:20:46.867 { 00:20:46.867 "method": "bdev_nvme_set_options", 00:20:46.867 "params": { 00:20:46.867 "action_on_timeout": "none", 00:20:46.867 "timeout_us": 0, 00:20:46.867 "timeout_admin_us": 0, 00:20:46.867 "keep_alive_timeout_ms": 10000, 00:20:46.867 "arbitration_burst": 0, 00:20:46.867 "low_priority_weight": 0, 00:20:46.867 "medium_priority_weight": 0, 00:20:46.867 "high_priority_weight": 0, 00:20:46.867 "nvme_adminq_poll_period_us": 10000, 00:20:46.867 "nvme_ioq_poll_period_us": 0, 00:20:46.867 "io_queue_requests": 0, 00:20:46.867 "delay_cmd_submit": true, 00:20:46.867 "transport_retry_count": 4, 00:20:46.867 "bdev_retry_count": 3, 00:20:46.867 "transport_ack_timeout": 0, 00:20:46.867 "ctrlr_loss_timeout_sec": 0, 00:20:46.867 "reconnect_delay_sec": 0, 00:20:46.867 "fast_io_fail_timeout_sec": 0, 00:20:46.867 "disable_auto_failback": false, 00:20:46.867 "generate_uuids": false, 00:20:46.867 "transport_tos": 0, 00:20:46.867 "nvme_error_stat": false, 00:20:46.867 "rdma_srq_size": 0, 00:20:46.867 "io_path_stat": false, 00:20:46.867 "allow_accel_sequence": false, 00:20:46.867 "rdma_max_cq_size": 0, 00:20:46.867 "rdma_cm_event_timeout_ms": 0, 00:20:46.867 "dhchap_digests": [ 00:20:46.867 "sha256", 00:20:46.867 "sha384", 00:20:46.867 "sha512" 00:20:46.867 ], 00:20:46.867 "dhchap_dhgroups": [ 00:20:46.867 "null", 00:20:46.867 "ffdhe2048", 00:20:46.867 "ffdhe3072", 00:20:46.867 "ffdhe4096", 00:20:46.867 "ffdhe6144", 00:20:46.867 "ffdhe8192" 00:20:46.867 ] 00:20:46.867 } 00:20:46.867 }, 00:20:46.867 { 00:20:46.867 "method": 
"bdev_nvme_set_hotplug", 00:20:46.867 "params": { 00:20:46.867 "period_us": 100000, 00:20:46.867 "enable": false 00:20:46.867 } 00:20:46.867 }, 00:20:46.867 { 00:20:46.867 "method": "bdev_malloc_create", 00:20:46.867 "params": { 00:20:46.867 "name": "malloc0", 00:20:46.867 "num_blocks": 8192, 00:20:46.867 "block_size": 4096, 00:20:46.867 "physical_block_size": 4096, 00:20:46.867 "uuid": "09aa0a15-0a5b-4223-8d54-924da20b0254", 00:20:46.867 "optimal_io_boundary": 0 00:20:46.867 } 00:20:46.867 }, 00:20:46.867 { 00:20:46.867 "method": "bdev_wait_for_examine" 00:20:46.867 } 00:20:46.867 ] 00:20:46.867 }, 00:20:46.867 { 00:20:46.867 "subsystem": "nbd", 00:20:46.867 "config": [] 00:20:46.867 }, 00:20:46.867 { 00:20:46.867 "subsystem": "scheduler", 00:20:46.867 "config": [ 00:20:46.867 { 00:20:46.867 "method": "framework_set_scheduler", 00:20:46.867 "params": { 00:20:46.867 "name": "static" 00:20:46.867 } 00:20:46.867 } 00:20:46.867 ] 00:20:46.867 }, 00:20:46.867 { 00:20:46.867 "subsystem": "nvmf", 00:20:46.867 "config": [ 00:20:46.867 { 00:20:46.867 "method": "nvmf_set_config", 00:20:46.867 "params": { 00:20:46.867 "discovery_filter": "match_any", 00:20:46.867 "admin_cmd_passthru": { 00:20:46.867 "identify_ctrlr": false 00:20:46.867 } 00:20:46.867 } 00:20:46.867 }, 00:20:46.867 { 00:20:46.867 "method": "nvmf_set_max_subsystems", 00:20:46.867 "params": { 00:20:46.867 "max_subsystems": 1024 00:20:46.867 } 00:20:46.867 }, 00:20:46.867 { 00:20:46.867 "method": "nvmf_set_crdt", 00:20:46.867 "params": { 00:20:46.867 "crdt1": 0, 00:20:46.867 "crdt2": 0, 00:20:46.867 "crdt3": 0 00:20:46.867 } 00:20:46.867 }, 00:20:46.867 { 00:20:46.867 "method": "nvmf_create_transport", 00:20:46.867 "params": { 00:20:46.867 "trtype": "TCP", 00:20:46.867 "max_queue_depth": 128, 00:20:46.867 "max_io_qpairs_per_ctrlr": 127, 00:20:46.867 "in_capsule_data_size": 4096, 00:20:46.867 "max_io_size": 131072, 00:20:46.867 "io_unit_size": 131072, 00:20:46.867 "max_aq_depth": 128, 00:20:46.867 "num_shared_buffers": 511, 00:20:46.867 "buf_cache_size": 4294967295, 00:20:46.867 "dif_insert_or_strip": false, 00:20:46.867 "zcopy": false, 00:20:46.867 "c2h_success": false, 00:20:46.867 "sock_priority": 0, 00:20:46.867 "abort_timeout_sec": 1, 00:20:46.867 "ack_timeout": 0 00:20:46.867 } 00:20:46.867 }, 00:20:46.867 { 00:20:46.867 "method": "nvmf_create_subsystem", 00:20:46.867 "params": { 00:20:46.867 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.867 "allow_any_host": false, 00:20:46.867 "serial_number": "00000000000000000000", 00:20:46.867 "model_number": "SPDK bdev Controller", 00:20:46.867 "max_namespaces": 32, 00:20:46.867 "min_cntlid": 1, 00:20:46.867 "max_cntlid": 65519, 00:20:46.867 "ana_reporting": false 00:20:46.867 } 00:20:46.867 }, 00:20:46.867 { 00:20:46.867 "method": "nvmf_subsystem_add_host", 00:20:46.867 "params": { 00:20:46.867 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.867 "host": "nqn.2016-06.io.spdk:host1", 00:20:46.867 "psk": "key0" 00:20:46.867 } 00:20:46.867 }, 00:20:46.867 { 00:20:46.867 "method": "nvmf_subsystem_add_ns", 00:20:46.867 "params": { 00:20:46.867 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.867 "namespace": { 00:20:46.867 "nsid": 1, 00:20:46.867 "bdev_name": "malloc0", 00:20:46.867 "nguid": "09AA0A150A5B42238D54924DA20B0254", 00:20:46.867 "uuid": "09aa0a15-0a5b-4223-8d54-924da20b0254", 00:20:46.867 "no_auto_visible": false 00:20:46.867 } 00:20:46.867 } 00:20:46.867 }, 00:20:46.867 { 00:20:46.867 "method": "nvmf_subsystem_add_listener", 00:20:46.867 "params": { 00:20:46.867 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:20:46.867 "listen_address": { 00:20:46.867 "trtype": "TCP", 00:20:46.867 "adrfam": "IPv4", 00:20:46.867 "traddr": "10.0.0.2", 00:20:46.867 "trsvcid": "4420" 00:20:46.867 }, 00:20:46.867 "secure_channel": true 00:20:46.867 } 00:20:46.867 } 00:20:46.867 ] 00:20:46.867 } 00:20:46.867 ] 00:20:46.867 }' 00:20:46.867 06:46:51 -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:47.126 06:46:51 -- target/tls.sh@264 -- # bperfcfg='{ 00:20:47.126 "subsystems": [ 00:20:47.126 { 00:20:47.126 "subsystem": "keyring", 00:20:47.126 "config": [ 00:20:47.126 { 00:20:47.126 "method": "keyring_file_add_key", 00:20:47.126 "params": { 00:20:47.126 "name": "key0", 00:20:47.126 "path": "/tmp/tmp.9Mng7s5y3P" 00:20:47.126 } 00:20:47.126 } 00:20:47.126 ] 00:20:47.126 }, 00:20:47.126 { 00:20:47.126 "subsystem": "iobuf", 00:20:47.126 "config": [ 00:20:47.126 { 00:20:47.126 "method": "iobuf_set_options", 00:20:47.126 "params": { 00:20:47.126 "small_pool_count": 8192, 00:20:47.126 "large_pool_count": 1024, 00:20:47.126 "small_bufsize": 8192, 00:20:47.126 "large_bufsize": 135168 00:20:47.126 } 00:20:47.126 } 00:20:47.126 ] 00:20:47.126 }, 00:20:47.126 { 00:20:47.126 "subsystem": "sock", 00:20:47.126 "config": [ 00:20:47.126 { 00:20:47.126 "method": "sock_impl_set_options", 00:20:47.126 "params": { 00:20:47.126 "impl_name": "posix", 00:20:47.126 "recv_buf_size": 2097152, 00:20:47.126 "send_buf_size": 2097152, 00:20:47.126 "enable_recv_pipe": true, 00:20:47.126 "enable_quickack": false, 00:20:47.126 "enable_placement_id": 0, 00:20:47.126 "enable_zerocopy_send_server": true, 00:20:47.126 "enable_zerocopy_send_client": false, 00:20:47.126 "zerocopy_threshold": 0, 00:20:47.126 "tls_version": 0, 00:20:47.126 "enable_ktls": false 00:20:47.126 } 00:20:47.126 }, 00:20:47.126 { 00:20:47.126 "method": "sock_impl_set_options", 00:20:47.126 "params": { 00:20:47.126 "impl_name": "ssl", 00:20:47.126 "recv_buf_size": 4096, 00:20:47.126 "send_buf_size": 4096, 00:20:47.126 "enable_recv_pipe": true, 00:20:47.126 "enable_quickack": false, 00:20:47.126 "enable_placement_id": 0, 00:20:47.126 "enable_zerocopy_send_server": true, 00:20:47.126 "enable_zerocopy_send_client": false, 00:20:47.126 "zerocopy_threshold": 0, 00:20:47.126 "tls_version": 0, 00:20:47.126 "enable_ktls": false 00:20:47.126 } 00:20:47.126 } 00:20:47.126 ] 00:20:47.126 }, 00:20:47.126 { 00:20:47.126 "subsystem": "vmd", 00:20:47.126 "config": [] 00:20:47.126 }, 00:20:47.126 { 00:20:47.126 "subsystem": "accel", 00:20:47.126 "config": [ 00:20:47.126 { 00:20:47.126 "method": "accel_set_options", 00:20:47.126 "params": { 00:20:47.126 "small_cache_size": 128, 00:20:47.126 "large_cache_size": 16, 00:20:47.126 "task_count": 2048, 00:20:47.126 "sequence_count": 2048, 00:20:47.126 "buf_count": 2048 00:20:47.126 } 00:20:47.126 } 00:20:47.126 ] 00:20:47.126 }, 00:20:47.126 { 00:20:47.126 "subsystem": "bdev", 00:20:47.126 "config": [ 00:20:47.126 { 00:20:47.126 "method": "bdev_set_options", 00:20:47.126 "params": { 00:20:47.126 "bdev_io_pool_size": 65535, 00:20:47.126 "bdev_io_cache_size": 256, 00:20:47.126 "bdev_auto_examine": true, 00:20:47.126 "iobuf_small_cache_size": 128, 00:20:47.126 "iobuf_large_cache_size": 16 00:20:47.126 } 00:20:47.126 }, 00:20:47.126 { 00:20:47.126 "method": "bdev_raid_set_options", 00:20:47.126 "params": { 00:20:47.126 "process_window_size_kb": 1024 00:20:47.126 } 00:20:47.126 }, 00:20:47.126 { 00:20:47.126 "method": "bdev_iscsi_set_options", 
00:20:47.126 "params": { 00:20:47.126 "timeout_sec": 30 00:20:47.126 } 00:20:47.126 }, 00:20:47.126 { 00:20:47.126 "method": "bdev_nvme_set_options", 00:20:47.126 "params": { 00:20:47.126 "action_on_timeout": "none", 00:20:47.127 "timeout_us": 0, 00:20:47.127 "timeout_admin_us": 0, 00:20:47.127 "keep_alive_timeout_ms": 10000, 00:20:47.127 "arbitration_burst": 0, 00:20:47.127 "low_priority_weight": 0, 00:20:47.127 "medium_priority_weight": 0, 00:20:47.127 "high_priority_weight": 0, 00:20:47.127 "nvme_adminq_poll_period_us": 10000, 00:20:47.127 "nvme_ioq_poll_period_us": 0, 00:20:47.127 "io_queue_requests": 512, 00:20:47.127 "delay_cmd_submit": true, 00:20:47.127 "transport_retry_count": 4, 00:20:47.127 "bdev_retry_count": 3, 00:20:47.127 "transport_ack_timeout": 0, 00:20:47.127 "ctrlr_loss_timeout_sec": 0, 00:20:47.127 "reconnect_delay_sec": 0, 00:20:47.127 "fast_io_fail_timeout_sec": 0, 00:20:47.127 "disable_auto_failback": false, 00:20:47.127 "generate_uuids": false, 00:20:47.127 "transport_tos": 0, 00:20:47.127 "nvme_error_stat": false, 00:20:47.127 "rdma_srq_size": 0, 00:20:47.127 "io_path_stat": false, 00:20:47.127 "allow_accel_sequence": false, 00:20:47.127 "rdma_max_cq_size": 0, 00:20:47.127 "rdma_cm_event_timeout_ms": 0, 00:20:47.127 "dhchap_digests": [ 00:20:47.127 "sha256", 00:20:47.127 "sha384", 00:20:47.127 "sha512" 00:20:47.127 ], 00:20:47.127 "dhchap_dhgroups": [ 00:20:47.127 "null", 00:20:47.127 "ffdhe2048", 00:20:47.127 "ffdhe3072", 00:20:47.127 "ffdhe4096", 00:20:47.127 "ffdhe6144", 00:20:47.127 "ffdhe8192" 00:20:47.127 ] 00:20:47.127 } 00:20:47.127 }, 00:20:47.127 { 00:20:47.127 "method": "bdev_nvme_attach_controller", 00:20:47.127 "params": { 00:20:47.127 "name": "nvme0", 00:20:47.127 "trtype": "TCP", 00:20:47.127 "adrfam": "IPv4", 00:20:47.127 "traddr": "10.0.0.2", 00:20:47.127 "trsvcid": "4420", 00:20:47.127 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.127 "prchk_reftag": false, 00:20:47.127 "prchk_guard": false, 00:20:47.127 "ctrlr_loss_timeout_sec": 0, 00:20:47.127 "reconnect_delay_sec": 0, 00:20:47.127 "fast_io_fail_timeout_sec": 0, 00:20:47.127 "psk": "key0", 00:20:47.127 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:47.127 "hdgst": false, 00:20:47.127 "ddgst": false 00:20:47.127 } 00:20:47.127 }, 00:20:47.127 { 00:20:47.127 "method": "bdev_nvme_set_hotplug", 00:20:47.127 "params": { 00:20:47.127 "period_us": 100000, 00:20:47.127 "enable": false 00:20:47.127 } 00:20:47.127 }, 00:20:47.127 { 00:20:47.127 "method": "bdev_enable_histogram", 00:20:47.127 "params": { 00:20:47.127 "name": "nvme0n1", 00:20:47.127 "enable": true 00:20:47.127 } 00:20:47.127 }, 00:20:47.127 { 00:20:47.127 "method": "bdev_wait_for_examine" 00:20:47.127 } 00:20:47.127 ] 00:20:47.127 }, 00:20:47.127 { 00:20:47.127 "subsystem": "nbd", 00:20:47.127 "config": [] 00:20:47.127 } 00:20:47.127 ] 00:20:47.127 }' 00:20:47.127 06:46:51 -- target/tls.sh@266 -- # killprocess 17442 00:20:47.127 06:46:51 -- common/autotest_common.sh@936 -- # '[' -z 17442 ']' 00:20:47.127 06:46:51 -- common/autotest_common.sh@940 -- # kill -0 17442 00:20:47.127 06:46:51 -- common/autotest_common.sh@941 -- # uname 00:20:47.127 06:46:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:47.127 06:46:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 17442 00:20:47.127 06:46:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:47.127 06:46:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:47.127 06:46:51 -- common/autotest_common.sh@954 -- # echo 'killing 
process with pid 17442' 00:20:47.127 killing process with pid 17442 00:20:47.127 06:46:51 -- common/autotest_common.sh@955 -- # kill 17442 00:20:47.127 Received shutdown signal, test time was about 1.000000 seconds 00:20:47.127 00:20:47.127 Latency(us) 00:20:47.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.127 =================================================================================================================== 00:20:47.127 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:47.127 06:46:51 -- common/autotest_common.sh@960 -- # wait 17442 00:20:47.413 06:46:51 -- target/tls.sh@267 -- # killprocess 17303 00:20:47.413 06:46:51 -- common/autotest_common.sh@936 -- # '[' -z 17303 ']' 00:20:47.413 06:46:51 -- common/autotest_common.sh@940 -- # kill -0 17303 00:20:47.413 06:46:51 -- common/autotest_common.sh@941 -- # uname 00:20:47.413 06:46:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:47.413 06:46:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 17303 00:20:47.413 06:46:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:47.413 06:46:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:47.413 06:46:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 17303' 00:20:47.413 killing process with pid 17303 00:20:47.413 06:46:51 -- common/autotest_common.sh@955 -- # kill 17303 00:20:47.413 06:46:51 -- common/autotest_common.sh@960 -- # wait 17303 00:20:47.701 06:46:52 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:20:47.701 06:46:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:47.701 06:46:52 -- target/tls.sh@269 -- # echo '{ 00:20:47.701 "subsystems": [ 00:20:47.701 { 00:20:47.701 "subsystem": "keyring", 00:20:47.701 "config": [ 00:20:47.701 { 00:20:47.701 "method": "keyring_file_add_key", 00:20:47.701 "params": { 00:20:47.701 "name": "key0", 00:20:47.701 "path": "/tmp/tmp.9Mng7s5y3P" 00:20:47.701 } 00:20:47.701 } 00:20:47.701 ] 00:20:47.702 }, 00:20:47.702 { 00:20:47.702 "subsystem": "iobuf", 00:20:47.702 "config": [ 00:20:47.702 { 00:20:47.702 "method": "iobuf_set_options", 00:20:47.702 "params": { 00:20:47.702 "small_pool_count": 8192, 00:20:47.702 "large_pool_count": 1024, 00:20:47.702 "small_bufsize": 8192, 00:20:47.702 "large_bufsize": 135168 00:20:47.702 } 00:20:47.702 } 00:20:47.702 ] 00:20:47.702 }, 00:20:47.702 { 00:20:47.702 "subsystem": "sock", 00:20:47.702 "config": [ 00:20:47.702 { 00:20:47.702 "method": "sock_impl_set_options", 00:20:47.702 "params": { 00:20:47.702 "impl_name": "posix", 00:20:47.702 "recv_buf_size": 2097152, 00:20:47.702 "send_buf_size": 2097152, 00:20:47.702 "enable_recv_pipe": true, 00:20:47.702 "enable_quickack": false, 00:20:47.702 "enable_placement_id": 0, 00:20:47.702 "enable_zerocopy_send_server": true, 00:20:47.702 "enable_zerocopy_send_client": false, 00:20:47.702 "zerocopy_threshold": 0, 00:20:47.702 "tls_version": 0, 00:20:47.702 "enable_ktls": false 00:20:47.702 } 00:20:47.702 }, 00:20:47.702 { 00:20:47.702 "method": "sock_impl_set_options", 00:20:47.702 "params": { 00:20:47.702 "impl_name": "ssl", 00:20:47.702 "recv_buf_size": 4096, 00:20:47.702 "send_buf_size": 4096, 00:20:47.702 "enable_recv_pipe": true, 00:20:47.702 "enable_quickack": false, 00:20:47.702 "enable_placement_id": 0, 00:20:47.702 "enable_zerocopy_send_server": true, 00:20:47.702 "enable_zerocopy_send_client": false, 00:20:47.702 "zerocopy_threshold": 0, 00:20:47.702 "tls_version": 0, 00:20:47.702 "enable_ktls": false 
00:20:47.702 } 00:20:47.702 } 00:20:47.702 ] 00:20:47.702 }, 00:20:47.702 { 00:20:47.702 "subsystem": "vmd", 00:20:47.702 "config": [] 00:20:47.702 }, 00:20:47.702 { 00:20:47.702 "subsystem": "accel", 00:20:47.702 "config": [ 00:20:47.702 { 00:20:47.702 "method": "accel_set_options", 00:20:47.702 "params": { 00:20:47.702 "small_cache_size": 128, 00:20:47.702 "large_cache_size": 16, 00:20:47.702 "task_count": 2048, 00:20:47.702 "sequence_count": 2048, 00:20:47.702 "buf_count": 2048 00:20:47.702 } 00:20:47.702 } 00:20:47.702 ] 00:20:47.702 }, 00:20:47.702 { 00:20:47.702 "subsystem": "bdev", 00:20:47.702 "config": [ 00:20:47.702 { 00:20:47.702 "method": "bdev_set_options", 00:20:47.702 "params": { 00:20:47.702 "bdev_io_pool_size": 65535, 00:20:47.702 "bdev_io_cache_size": 256, 00:20:47.702 "bdev_auto_examine": true, 00:20:47.702 "iobuf_small_cache_size": 128, 00:20:47.702 "iobuf_large_cache_size": 16 00:20:47.702 } 00:20:47.702 }, 00:20:47.702 { 00:20:47.702 "method": "bdev_raid_set_options", 00:20:47.702 "params": { 00:20:47.702 "process_window_size_kb": 1024 00:20:47.702 } 00:20:47.702 }, 00:20:47.702 { 00:20:47.702 "method": "bdev_iscsi_set_options", 00:20:47.702 "params": { 00:20:47.702 "timeout_sec": 30 00:20:47.702 } 00:20:47.702 }, 00:20:47.702 { 00:20:47.702 "method": "bdev_nvme_set_options", 00:20:47.702 "params": { 00:20:47.702 "action_on_timeout": "none", 00:20:47.702 "timeout_us": 0, 00:20:47.702 "timeout_admin_us": 0, 00:20:47.702 "keep_alive_timeout_ms": 10000, 00:20:47.702 "arbitration_burst": 0, 00:20:47.702 "low_priority_weight": 0, 00:20:47.702 "medium_priority_weight": 0, 00:20:47.702 "high_priority_weight": 0, 00:20:47.702 "nvme_adminq_poll_period_us": 10000, 00:20:47.702 "nvme_ioq_poll_period_us": 0, 00:20:47.702 "io_queue_requests": 0, 00:20:47.702 "delay_cmd_submit": true, 00:20:47.702 "transport_retry_count": 4, 00:20:47.702 "bdev_retry_count": 3, 00:20:47.702 "transport_ack_timeout": 0, 00:20:47.702 "ctrlr_loss_timeout_sec": 0, 00:20:47.702 "reconnect_delay_sec": 0, 00:20:47.702 "fast_io_fail_timeout_sec": 0, 00:20:47.702 "disable_auto_failback": false, 00:20:47.702 "generate_uuids": false, 00:20:47.702 "transport_tos": 0, 00:20:47.702 "nvme_error_stat": false, 00:20:47.702 "rdma_srq_size": 0, 00:20:47.702 "io_path_stat": false, 00:20:47.702 "allow_accel_sequence": false, 00:20:47.702 "rdma_max_cq_size": 0, 00:20:47.702 "rdma_cm_event_timeout_ms": 0, 00:20:47.702 "dhchap_digests": [ 00:20:47.702 "sha256", 00:20:47.702 "sha384", 00:20:47.702 "sha512" 00:20:47.702 ], 00:20:47.702 "dhchap_dhgroups": [ 00:20:47.702 "null", 00:20:47.702 "ffdhe2048", 00:20:47.702 "ffdhe3072", 00:20:47.702 "ffdhe4096", 00:20:47.702 "ffdhe6144", 00:20:47.702 "ffdhe8192" 00:20:47.702 ] 00:20:47.702 } 00:20:47.702 }, 00:20:47.702 { 00:20:47.702 "method": "bdev_nvme_set_hotplug", 00:20:47.702 "params": { 00:20:47.702 "period_us": 100000, 00:20:47.702 "enable": false 00:20:47.702 } 00:20:47.702 }, 00:20:47.702 { 00:20:47.702 "method": "bdev_malloc_create", 00:20:47.702 "params": { 00:20:47.702 "name": "malloc0", 00:20:47.702 "num_blocks": 8192, 00:20:47.702 "block_size": 4096, 00:20:47.702 "physical_block_size": 4096, 00:20:47.702 "uuid": "09aa0a15-0a5b-4223-8d54-924da20b0254", 00:20:47.702 "optimal_io_boundary": 0 00:20:47.702 } 00:20:47.702 }, 00:20:47.702 { 00:20:47.702 "method": "bdev_wait_for_examine" 00:20:47.702 } 00:20:47.702 ] 00:20:47.702 }, 00:20:47.702 { 00:20:47.702 "subsystem": "nbd", 00:20:47.702 "config": [] 00:20:47.702 }, 00:20:47.702 { 00:20:47.702 "subsystem": "scheduler", 
00:20:47.702 "config": [ 00:20:47.702 { 00:20:47.702 "method": "framework_set_scheduler", 00:20:47.702 "params": { 00:20:47.702 "name": "static" 00:20:47.702 } 00:20:47.702 } 00:20:47.702 ] 00:20:47.702 }, 00:20:47.702 { 00:20:47.702 "subsystem": "nvmf", 00:20:47.702 "config": [ 00:20:47.702 { 00:20:47.702 "method": "nvmf_set_config", 00:20:47.702 "params": { 00:20:47.702 "discovery_filter": "match_any", 00:20:47.702 "admin_cmd_passthru": { 00:20:47.702 "identify_ctrlr": false 00:20:47.702 } 00:20:47.702 } 00:20:47.702 }, 00:20:47.702 { 00:20:47.702 "method": "nvmf_set_max_subsystems", 00:20:47.702 "params": { 00:20:47.702 "max_subsystems": 1024 00:20:47.702 } 00:20:47.702 }, 00:20:47.702 { 00:20:47.702 "method": "nvmf_set_crdt", 00:20:47.702 "params": { 00:20:47.702 "crdt1": 0, 00:20:47.702 "crdt2": 0, 00:20:47.702 "crdt3": 0 00:20:47.702 } 00:20:47.702 }, 00:20:47.702 { 00:20:47.702 "method": "nvmf_create_transport", 00:20:47.702 "params": { 00:20:47.702 "trtype": "TCP", 00:20:47.702 "max_queue_depth": 128, 00:20:47.702 "max_io_qpairs_per_ctrlr": 127, 00:20:47.702 "in_capsule_data_size": 4096, 00:20:47.702 "max_io_size": 131072, 00:20:47.702 "io_unit_size": 131072, 00:20:47.702 "max_aq_depth": 128, 00:20:47.702 "num_shared_buffers": 511, 00:20:47.702 "buf_cache_size": 4294967295, 00:20:47.702 "dif_insert_or_strip": false, 00:20:47.702 "zcopy": false, 00:20:47.702 "c2h_success": false, 00:20:47.702 "sock_priority": 0, 00:20:47.702 "abort_timeout_sec": 1, 00:20:47.702 "ack_timeout": 0 00:20:47.702 } 00:20:47.702 }, 00:20:47.702 { 00:20:47.702 "method": "nvmf_create_subsystem", 00:20:47.702 "params": { 00:20:47.702 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.702 "allow_any_host": false, 00:20:47.702 "serial_number": "00000000000000000000", 00:20:47.702 "model_number": "SPDK bdev Controller", 00:20:47.702 "max_namespaces": 32, 00:20:47.702 "min_cntlid": 1, 00:20:47.702 "max_cntlid": 65519, 00:20:47.702 "ana_reporting": false 00:20:47.702 } 00:20:47.702 }, 00:20:47.702 { 00:20:47.702 "method": "nvmf_subsystem_add_host", 00:20:47.702 "params": { 00:20:47.702 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.702 "host": "nqn.2016-06.io.spdk:host1", 00:20:47.702 "psk": "key0" 00:20:47.702 } 00:20:47.702 }, 00:20:47.702 { 00:20:47.702 "method": "nvmf_subsystem_add_ns", 00:20:47.702 "params": { 00:20:47.702 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.702 "namespace": { 00:20:47.702 "nsid": 1, 00:20:47.702 "bdev_name": "malloc0", 00:20:47.702 "nguid": "09AA0A150A5B42238D54924DA20B0254", 00:20:47.702 "uuid": "09aa0a15-0a5b-4223-8d54-924da20b0254", 00:20:47.702 "no_auto_visible": false 00:20:47.702 } 00:20:47.702 } 00:20:47.702 }, 00:20:47.702 { 00:20:47.702 "method": "nvmf_subsystem_add_listener", 00:20:47.702 "params": { 00:20:47.702 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.702 "listen_address": { 00:20:47.702 "trtype": "TCP", 00:20:47.702 "adrfam": "IPv4", 00:20:47.702 "traddr": "10.0.0.2", 00:20:47.702 "trsvcid": "4420" 00:20:47.702 }, 00:20:47.702 "secure_channel": true 00:20:47.702 } 00:20:47.702 } 00:20:47.702 ] 00:20:47.702 } 00:20:47.702 ] 00:20:47.702 }' 00:20:47.702 06:46:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:47.702 06:46:52 -- common/autotest_common.sh@10 -- # set +x 00:20:47.702 06:46:52 -- nvmf/common.sh@470 -- # nvmfpid=17734 00:20:47.702 06:46:52 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:47.702 06:46:52 -- nvmf/common.sh@471 -- # 
waitforlisten 17734 00:20:47.702 06:46:52 -- common/autotest_common.sh@817 -- # '[' -z 17734 ']' 00:20:47.702 06:46:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.702 06:46:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:47.703 06:46:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.703 06:46:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:47.703 06:46:52 -- common/autotest_common.sh@10 -- # set +x 00:20:47.703 [2024-04-17 06:46:52.158004] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:20:47.703 [2024-04-17 06:46:52.158097] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.703 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.703 [2024-04-17 06:46:52.226564] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.960 [2024-04-17 06:46:52.312459] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:47.960 [2024-04-17 06:46:52.312528] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:47.960 [2024-04-17 06:46:52.312569] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:47.960 [2024-04-17 06:46:52.312581] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:47.960 [2024-04-17 06:46:52.312591] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
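The target that just started (pid 17734) was launched with -c /dev/fd/62, replaying the JSON captured earlier via save_config, so the keyring entry, the nvmf subsystem, and the TLS-enabled TCP listener are recreated in one step instead of through individual RPCs. A rough equivalent of that save-and-replay pattern using a temporary file rather than a process-substitution fd (file path and backgrounding are illustrative, not the test's own code):

    # Capture the live configuration of the running target as JSON.
    ./scripts/rpc.py -s /var/tmp/spdk.sock save_config > /tmp/tgt_config.json

    # Start a fresh target from that JSON; the keyring, subsystem, and the
    # TLS listener on 10.0.0.2:4420 come back exactly as they were saved.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /tmp/tgt_config.json &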
00:20:47.960 [2024-04-17 06:46:52.312658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.960 [2024-04-17 06:46:52.548532] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:48.218 [2024-04-17 06:46:52.580551] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:48.218 [2024-04-17 06:46:52.590394] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:48.784 06:46:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:48.784 06:46:53 -- common/autotest_common.sh@850 -- # return 0 00:20:48.784 06:46:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:48.784 06:46:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:48.784 06:46:53 -- common/autotest_common.sh@10 -- # set +x 00:20:48.784 06:46:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:48.784 06:46:53 -- target/tls.sh@272 -- # bdevperf_pid=17888 00:20:48.784 06:46:53 -- target/tls.sh@273 -- # waitforlisten 17888 /var/tmp/bdevperf.sock 00:20:48.784 06:46:53 -- common/autotest_common.sh@817 -- # '[' -z 17888 ']' 00:20:48.784 06:46:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:48.784 06:46:53 -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:48.784 06:46:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:48.784 06:46:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:48.784 06:46:53 -- target/tls.sh@270 -- # echo '{ 00:20:48.784 "subsystems": [ 00:20:48.784 { 00:20:48.784 "subsystem": "keyring", 00:20:48.784 "config": [ 00:20:48.784 { 00:20:48.784 "method": "keyring_file_add_key", 00:20:48.784 "params": { 00:20:48.784 "name": "key0", 00:20:48.784 "path": "/tmp/tmp.9Mng7s5y3P" 00:20:48.784 } 00:20:48.784 } 00:20:48.784 ] 00:20:48.784 }, 00:20:48.784 { 00:20:48.784 "subsystem": "iobuf", 00:20:48.784 "config": [ 00:20:48.784 { 00:20:48.784 "method": "iobuf_set_options", 00:20:48.784 "params": { 00:20:48.784 "small_pool_count": 8192, 00:20:48.784 "large_pool_count": 1024, 00:20:48.784 "small_bufsize": 8192, 00:20:48.784 "large_bufsize": 135168 00:20:48.784 } 00:20:48.784 } 00:20:48.784 ] 00:20:48.784 }, 00:20:48.784 { 00:20:48.784 "subsystem": "sock", 00:20:48.784 "config": [ 00:20:48.784 { 00:20:48.784 "method": "sock_impl_set_options", 00:20:48.784 "params": { 00:20:48.784 "impl_name": "posix", 00:20:48.784 "recv_buf_size": 2097152, 00:20:48.784 "send_buf_size": 2097152, 00:20:48.784 "enable_recv_pipe": true, 00:20:48.784 "enable_quickack": false, 00:20:48.784 "enable_placement_id": 0, 00:20:48.784 "enable_zerocopy_send_server": true, 00:20:48.784 "enable_zerocopy_send_client": false, 00:20:48.784 "zerocopy_threshold": 0, 00:20:48.784 "tls_version": 0, 00:20:48.784 "enable_ktls": false 00:20:48.784 } 00:20:48.784 }, 00:20:48.784 { 00:20:48.784 "method": "sock_impl_set_options", 00:20:48.784 "params": { 00:20:48.784 "impl_name": "ssl", 00:20:48.784 "recv_buf_size": 4096, 00:20:48.784 "send_buf_size": 4096, 00:20:48.784 "enable_recv_pipe": true, 00:20:48.784 "enable_quickack": false, 00:20:48.784 "enable_placement_id": 0, 00:20:48.784 "enable_zerocopy_send_server": true, 00:20:48.784 "enable_zerocopy_send_client": false, 00:20:48.784 "zerocopy_threshold": 
0, 00:20:48.784 "tls_version": 0, 00:20:48.784 "enable_ktls": false 00:20:48.784 } 00:20:48.784 } 00:20:48.784 ] 00:20:48.784 }, 00:20:48.784 { 00:20:48.784 "subsystem": "vmd", 00:20:48.784 "config": [] 00:20:48.784 }, 00:20:48.784 { 00:20:48.784 "subsystem": "accel", 00:20:48.784 "config": [ 00:20:48.784 { 00:20:48.784 "method": "accel_set_options", 00:20:48.784 "params": { 00:20:48.784 "small_cache_size": 128, 00:20:48.784 "large_cache_size": 16, 00:20:48.784 "task_count": 2048, 00:20:48.784 "sequence_count": 2048, 00:20:48.784 "buf_count": 2048 00:20:48.784 } 00:20:48.784 } 00:20:48.784 ] 00:20:48.784 }, 00:20:48.784 { 00:20:48.784 "subsystem": "bdev", 00:20:48.784 "config": [ 00:20:48.784 { 00:20:48.784 "method": "bdev_set_options", 00:20:48.784 "params": { 00:20:48.784 "bdev_io_pool_size": 65535, 00:20:48.784 "bdev_io_cache_size": 256, 00:20:48.784 "bdev_auto_examine": true, 00:20:48.784 "iobuf_small_cache_size": 128, 00:20:48.784 "iobuf_large_cache_size": 16 00:20:48.784 } 00:20:48.784 }, 00:20:48.784 { 00:20:48.784 "method": "bdev_raid_set_options", 00:20:48.784 "params": { 00:20:48.784 "process_window_size_kb": 1024 00:20:48.784 } 00:20:48.784 }, 00:20:48.784 { 00:20:48.784 "method": "bdev_iscsi_set_options", 00:20:48.784 "params": { 00:20:48.784 "timeout_sec": 30 00:20:48.784 } 00:20:48.784 }, 00:20:48.784 { 00:20:48.784 "method": "bdev_nvme_set_options", 00:20:48.784 "params": { 00:20:48.784 "action_on_timeout": "none", 00:20:48.784 "timeout_us": 0, 00:20:48.785 "timeout_admin_us": 0, 00:20:48.785 "keep_alive_timeout_ms": 10000, 00:20:48.785 "arbitration_burst": 0, 00:20:48.785 "low_priority_weight": 0, 00:20:48.785 "medium_priority_weight": 0, 00:20:48.785 "high_priority_weight": 0, 00:20:48.785 "nvme_adminq_poll_period_us": 10000, 00:20:48.785 "nvme_ioq_poll_period_us": 0, 00:20:48.785 "io_queue_requests": 512, 00:20:48.785 "delay_cmd_submit": true, 00:20:48.785 "transport_retry_count": 4, 00:20:48.785 "bdev_retry_count": 3, 00:20:48.785 "transport_ack_timeout": 0, 00:20:48.785 "ctrlr_loss_timeout_sec": 0, 00:20:48.785 "reconnect_delay_sec": 0, 00:20:48.785 "fast_io_fail_timeout_sec": 0, 00:20:48.785 "disable_auto_failback": false, 00:20:48.785 "generate_uuids": false, 00:20:48.785 "transport_tos": 0, 00:20:48.785 "nvme_error_stat": false, 00:20:48.785 "rdma_srq_size": 0, 00:20:48.785 "io_path_stat": false, 00:20:48.785 "allow_accel_sequence": false, 00:20:48.785 "rdma_max_cq_size": 0, 00:20:48.785 "rdma_cm_event_timeout_ms": 0, 00:20:48.785 "dhchap_digests": [ 00:20:48.785 "sha256", 00:20:48.785 "sha384", 00:20:48.785 "sha512" 00:20:48.785 ], 00:20:48.785 "dhchap_dhgroups": [ 00:20:48.785 "null", 00:20:48.785 "ffdhe2048", 00:20:48.785 "ffdhe3072", 00:20:48.785 "ffdhe4Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:48.785 096", 00:20:48.785 "ffdhe6144", 00:20:48.785 "ffdhe8192" 00:20:48.785 ] 00:20:48.785 } 00:20:48.785 }, 00:20:48.785 { 00:20:48.785 "method": "bdev_nvme_attach_controller", 00:20:48.785 "params": { 00:20:48.785 "name": "nvme0", 00:20:48.785 "trtype": "TCP", 00:20:48.785 "adrfam": "IPv4", 00:20:48.785 "traddr": "10.0.0.2", 00:20:48.785 "trsvcid": "4420", 00:20:48.785 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.785 "prchk_reftag": false, 00:20:48.785 "prchk_guard": false, 00:20:48.785 "ctrlr_loss_timeout_sec": 0, 00:20:48.785 "reconnect_delay_sec": 0, 00:20:48.785 "fast_io_fail_timeout_sec": 0, 00:20:48.785 "psk": "key0", 00:20:48.785 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:48.785 "hdgst": false, 00:20:48.785 "ddgst": false 00:20:48.785 } 00:20:48.785 }, 00:20:48.785 { 00:20:48.785 "method": "bdev_nvme_set_hotplug", 00:20:48.785 "params": { 00:20:48.785 "period_us": 100000, 00:20:48.785 "enable": false 00:20:48.785 } 00:20:48.785 }, 00:20:48.785 { 00:20:48.785 "method": "bdev_enable_histogram", 00:20:48.785 "params": { 00:20:48.785 "name": "nvme0n1", 00:20:48.785 "enable": true 00:20:48.785 } 00:20:48.785 }, 00:20:48.785 { 00:20:48.785 "method": "bdev_wait_for_examine" 00:20:48.785 } 00:20:48.785 ] 00:20:48.785 }, 00:20:48.785 { 00:20:48.785 "subsystem": "nbd", 00:20:48.785 "config": [] 00:20:48.785 } 00:20:48.785 ] 00:20:48.785 }' 00:20:48.785 06:46:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:48.785 06:46:53 -- common/autotest_common.sh@10 -- # set +x 00:20:48.785 [2024-04-17 06:46:53.163909] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:20:48.785 [2024-04-17 06:46:53.164016] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid17888 ] 00:20:48.785 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.785 [2024-04-17 06:46:53.226772] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.785 [2024-04-17 06:46:53.316327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.043 [2024-04-17 06:46:53.490779] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:49.608 06:46:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:49.608 06:46:54 -- common/autotest_common.sh@850 -- # return 0 00:20:49.608 06:46:54 -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:49.608 06:46:54 -- target/tls.sh@275 -- # jq -r '.[].name' 00:20:49.865 06:46:54 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.865 06:46:54 -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:49.865 Running I/O for 1 seconds... 
00:20:51.237 00:20:51.237 Latency(us) 00:20:51.237 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.237 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:51.237 Verification LBA range: start 0x0 length 0x2000 00:20:51.237 nvme0n1 : 1.05 2448.68 9.57 0.00 0.00 51202.34 10291.58 85051.16 00:20:51.237 =================================================================================================================== 00:20:51.237 Total : 2448.68 9.57 0.00 0.00 51202.34 10291.58 85051.16 00:20:51.237 0 00:20:51.238 06:46:55 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:20:51.238 06:46:55 -- target/tls.sh@279 -- # cleanup 00:20:51.238 06:46:55 -- target/tls.sh@15 -- # process_shm --id 0 00:20:51.238 06:46:55 -- common/autotest_common.sh@794 -- # type=--id 00:20:51.238 06:46:55 -- common/autotest_common.sh@795 -- # id=0 00:20:51.238 06:46:55 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:20:51.238 06:46:55 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:51.238 06:46:55 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:20:51.238 06:46:55 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:20:51.238 06:46:55 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:20:51.238 06:46:55 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:51.238 nvmf_trace.0 00:20:51.238 06:46:55 -- common/autotest_common.sh@809 -- # return 0 00:20:51.238 06:46:55 -- target/tls.sh@16 -- # killprocess 17888 00:20:51.238 06:46:55 -- common/autotest_common.sh@936 -- # '[' -z 17888 ']' 00:20:51.238 06:46:55 -- common/autotest_common.sh@940 -- # kill -0 17888 00:20:51.238 06:46:55 -- common/autotest_common.sh@941 -- # uname 00:20:51.238 06:46:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:51.238 06:46:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 17888 00:20:51.238 06:46:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:51.238 06:46:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:51.238 06:46:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 17888' 00:20:51.238 killing process with pid 17888 00:20:51.238 06:46:55 -- common/autotest_common.sh@955 -- # kill 17888 00:20:51.238 Received shutdown signal, test time was about 1.000000 seconds 00:20:51.238 00:20:51.238 Latency(us) 00:20:51.238 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.238 =================================================================================================================== 00:20:51.238 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:51.238 06:46:55 -- common/autotest_common.sh@960 -- # wait 17888 00:20:51.496 06:46:55 -- target/tls.sh@17 -- # nvmftestfini 00:20:51.496 06:46:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:51.496 06:46:55 -- nvmf/common.sh@117 -- # sync 00:20:51.496 06:46:55 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:51.496 06:46:55 -- nvmf/common.sh@120 -- # set +e 00:20:51.496 06:46:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:51.496 06:46:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:51.496 rmmod nvme_tcp 00:20:51.496 rmmod nvme_fabrics 00:20:51.496 rmmod nvme_keyring 00:20:51.496 06:46:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:51.496 06:46:55 -- nvmf/common.sh@124 -- # set -e 
00:20:51.496 06:46:55 -- nvmf/common.sh@125 -- # return 0 00:20:51.496 06:46:55 -- nvmf/common.sh@478 -- # '[' -n 17734 ']' 00:20:51.496 06:46:55 -- nvmf/common.sh@479 -- # killprocess 17734 00:20:51.496 06:46:55 -- common/autotest_common.sh@936 -- # '[' -z 17734 ']' 00:20:51.496 06:46:55 -- common/autotest_common.sh@940 -- # kill -0 17734 00:20:51.496 06:46:55 -- common/autotest_common.sh@941 -- # uname 00:20:51.496 06:46:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:51.496 06:46:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 17734 00:20:51.496 06:46:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:51.496 06:46:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:51.496 06:46:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 17734' 00:20:51.496 killing process with pid 17734 00:20:51.496 06:46:55 -- common/autotest_common.sh@955 -- # kill 17734 00:20:51.496 06:46:55 -- common/autotest_common.sh@960 -- # wait 17734 00:20:51.754 06:46:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:51.754 06:46:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:51.754 06:46:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:51.754 06:46:56 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:51.754 06:46:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:51.754 06:46:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.754 06:46:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:51.754 06:46:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.656 06:46:58 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:53.656 06:46:58 -- target/tls.sh@18 -- # rm -f /tmp/tmp.2pLuAhnjXI /tmp/tmp.PQMv31NpnP /tmp/tmp.9Mng7s5y3P 00:20:53.656 00:20:53.656 real 1m18.467s 00:20:53.656 user 1m53.744s 00:20:53.656 sys 0m29.683s 00:20:53.656 06:46:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:53.656 06:46:58 -- common/autotest_common.sh@10 -- # set +x 00:20:53.656 ************************************ 00:20:53.656 END TEST nvmf_tls 00:20:53.656 ************************************ 00:20:53.656 06:46:58 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:53.656 06:46:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:53.656 06:46:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:53.656 06:46:58 -- common/autotest_common.sh@10 -- # set +x 00:20:53.915 ************************************ 00:20:53.915 START TEST nvmf_fips 00:20:53.915 ************************************ 00:20:53.915 06:46:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:53.915 * Looking for test storage... 
00:20:53.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:53.915 06:46:58 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:53.915 06:46:58 -- nvmf/common.sh@7 -- # uname -s 00:20:53.915 06:46:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:53.915 06:46:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:53.915 06:46:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:53.915 06:46:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:53.915 06:46:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:53.915 06:46:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:53.915 06:46:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:53.915 06:46:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:53.915 06:46:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:53.915 06:46:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:53.915 06:46:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.915 06:46:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.915 06:46:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:53.915 06:46:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:53.915 06:46:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:53.915 06:46:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:53.915 06:46:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:53.915 06:46:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:53.915 06:46:58 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:53.915 06:46:58 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:53.915 06:46:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.915 06:46:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.915 06:46:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.915 06:46:58 -- paths/export.sh@5 -- # export PATH 00:20:53.915 06:46:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.915 06:46:58 -- nvmf/common.sh@47 -- # : 0 00:20:53.915 06:46:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:53.915 06:46:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:53.915 06:46:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:53.915 06:46:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:53.915 06:46:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:53.915 06:46:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:53.915 06:46:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:53.915 06:46:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:53.915 06:46:58 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:53.915 06:46:58 -- fips/fips.sh@89 -- # check_openssl_version 00:20:53.915 06:46:58 -- fips/fips.sh@83 -- # local target=3.0.0 00:20:53.915 06:46:58 -- fips/fips.sh@85 -- # openssl version 00:20:53.915 06:46:58 -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:53.915 06:46:58 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:53.915 06:46:58 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:53.915 06:46:58 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:53.915 06:46:58 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:53.915 06:46:58 -- scripts/common.sh@333 -- # IFS=.-: 00:20:53.915 06:46:58 -- scripts/common.sh@333 -- # read -ra ver1 00:20:53.915 06:46:58 -- scripts/common.sh@334 -- # IFS=.-: 00:20:53.915 06:46:58 -- scripts/common.sh@334 -- # read -ra ver2 00:20:53.915 06:46:58 -- scripts/common.sh@335 -- # local 'op=>=' 00:20:53.915 06:46:58 -- scripts/common.sh@337 -- # ver1_l=3 00:20:53.915 06:46:58 -- scripts/common.sh@338 -- # ver2_l=3 00:20:53.915 06:46:58 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:20:53.915 06:46:58 -- scripts/common.sh@341 -- # case "$op" in 00:20:53.915 06:46:58 -- scripts/common.sh@345 -- # : 1 00:20:53.915 06:46:58 -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:53.915 06:46:58 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:53.915 06:46:58 -- scripts/common.sh@362 -- # decimal 3 00:20:53.915 06:46:58 -- scripts/common.sh@350 -- # local d=3 00:20:53.915 06:46:58 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:53.915 06:46:58 -- scripts/common.sh@352 -- # echo 3 00:20:53.915 06:46:58 -- scripts/common.sh@362 -- # ver1[v]=3 00:20:53.915 06:46:58 -- scripts/common.sh@363 -- # decimal 3 00:20:53.915 06:46:58 -- scripts/common.sh@350 -- # local d=3 00:20:53.915 06:46:58 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:53.915 06:46:58 -- scripts/common.sh@352 -- # echo 3 00:20:53.915 06:46:58 -- scripts/common.sh@363 -- # ver2[v]=3 00:20:53.915 06:46:58 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:53.915 06:46:58 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:53.915 06:46:58 -- scripts/common.sh@361 -- # (( v++ )) 00:20:53.915 06:46:58 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:53.915 06:46:58 -- scripts/common.sh@362 -- # decimal 0 00:20:53.915 06:46:58 -- scripts/common.sh@350 -- # local d=0 00:20:53.915 06:46:58 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:53.915 06:46:58 -- scripts/common.sh@352 -- # echo 0 00:20:53.915 06:46:58 -- scripts/common.sh@362 -- # ver1[v]=0 00:20:53.915 06:46:58 -- scripts/common.sh@363 -- # decimal 0 00:20:53.915 06:46:58 -- scripts/common.sh@350 -- # local d=0 00:20:53.915 06:46:58 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:53.915 06:46:58 -- scripts/common.sh@352 -- # echo 0 00:20:53.915 06:46:58 -- scripts/common.sh@363 -- # ver2[v]=0 00:20:53.915 06:46:58 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:53.915 06:46:58 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:53.915 06:46:58 -- scripts/common.sh@361 -- # (( v++ )) 00:20:53.915 06:46:58 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:53.915 06:46:58 -- scripts/common.sh@362 -- # decimal 9 00:20:53.915 06:46:58 -- scripts/common.sh@350 -- # local d=9 00:20:53.915 06:46:58 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:53.915 06:46:58 -- scripts/common.sh@352 -- # echo 9 00:20:53.915 06:46:58 -- scripts/common.sh@362 -- # ver1[v]=9 00:20:53.915 06:46:58 -- scripts/common.sh@363 -- # decimal 0 00:20:53.915 06:46:58 -- scripts/common.sh@350 -- # local d=0 00:20:53.915 06:46:58 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:53.915 06:46:58 -- scripts/common.sh@352 -- # echo 0 00:20:53.915 06:46:58 -- scripts/common.sh@363 -- # ver2[v]=0 00:20:53.915 06:46:58 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:53.915 06:46:58 -- scripts/common.sh@364 -- # return 0 00:20:53.915 06:46:58 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:53.915 06:46:58 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:53.915 06:46:58 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:53.915 06:46:58 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:53.915 06:46:58 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:53.915 06:46:58 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:53.915 06:46:58 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:53.915 06:46:58 -- fips/fips.sh@113 -- # build_openssl_config 00:20:53.915 06:46:58 -- fips/fips.sh@37 -- # cat 00:20:53.915 06:46:58 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:20:53.915 06:46:58 -- fips/fips.sh@58 -- # cat - 00:20:53.915 06:46:58 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:53.915 06:46:58 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:53.915 06:46:58 -- fips/fips.sh@116 -- # mapfile -t providers 00:20:53.915 06:46:58 -- fips/fips.sh@116 -- # openssl list -providers 00:20:53.915 06:46:58 -- fips/fips.sh@116 -- # grep name 00:20:53.915 06:46:58 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:53.915 06:46:58 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:53.915 06:46:58 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:53.915 06:46:58 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:53.915 06:46:58 -- fips/fips.sh@127 -- # : 00:20:53.915 06:46:58 -- common/autotest_common.sh@638 -- # local es=0 00:20:53.915 06:46:58 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:53.915 06:46:58 -- common/autotest_common.sh@626 -- # local arg=openssl 00:20:53.916 06:46:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:53.916 06:46:58 -- common/autotest_common.sh@630 -- # type -t openssl 00:20:53.916 06:46:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:53.916 06:46:58 -- common/autotest_common.sh@632 -- # type -P openssl 00:20:53.916 06:46:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:53.916 06:46:58 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:20:53.916 06:46:58 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:20:53.916 06:46:58 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:20:54.174 Error setting digest 00:20:54.174 00F210D3CF7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:54.174 00F210D3CF7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:54.174 06:46:58 -- common/autotest_common.sh@641 -- # es=1 00:20:54.174 06:46:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:54.174 06:46:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:54.174 06:46:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:54.174 06:46:58 -- fips/fips.sh@130 -- # nvmftestinit 00:20:54.174 06:46:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:54.174 06:46:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:54.174 06:46:58 -- nvmf/common.sh@437 -- # prepare_net_devs 
00:20:54.174 06:46:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:54.174 06:46:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:54.174 06:46:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.174 06:46:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:54.174 06:46:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.174 06:46:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:54.174 06:46:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:54.174 06:46:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:54.174 06:46:58 -- common/autotest_common.sh@10 -- # set +x 00:20:56.073 06:47:00 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:56.073 06:47:00 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:56.073 06:47:00 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:56.073 06:47:00 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:56.073 06:47:00 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:56.073 06:47:00 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:56.073 06:47:00 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:56.073 06:47:00 -- nvmf/common.sh@295 -- # net_devs=() 00:20:56.073 06:47:00 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:56.073 06:47:00 -- nvmf/common.sh@296 -- # e810=() 00:20:56.073 06:47:00 -- nvmf/common.sh@296 -- # local -ga e810 00:20:56.073 06:47:00 -- nvmf/common.sh@297 -- # x722=() 00:20:56.073 06:47:00 -- nvmf/common.sh@297 -- # local -ga x722 00:20:56.073 06:47:00 -- nvmf/common.sh@298 -- # mlx=() 00:20:56.073 06:47:00 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:56.073 06:47:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:56.073 06:47:00 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:56.073 06:47:00 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:56.073 06:47:00 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:56.073 06:47:00 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:56.073 06:47:00 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:56.073 06:47:00 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:56.073 06:47:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:56.073 06:47:00 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:56.073 06:47:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:56.074 06:47:00 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:56.074 06:47:00 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:56.074 06:47:00 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:56.074 06:47:00 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:56.074 06:47:00 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:56.074 06:47:00 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:56.074 06:47:00 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:56.074 06:47:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:56.074 06:47:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:56.074 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:56.074 06:47:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:56.074 06:47:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:56.074 06:47:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.074 06:47:00 -- nvmf/common.sh@351 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.074 06:47:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:56.074 06:47:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:56.074 06:47:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:56.074 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:56.074 06:47:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:56.074 06:47:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:56.074 06:47:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.074 06:47:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.074 06:47:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:56.074 06:47:00 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:56.074 06:47:00 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:56.074 06:47:00 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:56.074 06:47:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:56.074 06:47:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.074 06:47:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:56.074 06:47:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.074 06:47:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:56.074 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:56.074 06:47:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.074 06:47:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:56.074 06:47:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.074 06:47:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:56.074 06:47:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.074 06:47:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:56.074 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:56.074 06:47:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.074 06:47:00 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:56.074 06:47:00 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:56.074 06:47:00 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:56.074 06:47:00 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:56.074 06:47:00 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:56.074 06:47:00 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:56.074 06:47:00 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:56.074 06:47:00 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:56.074 06:47:00 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:56.074 06:47:00 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:56.074 06:47:00 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:56.074 06:47:00 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:56.074 06:47:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:56.074 06:47:00 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:56.074 06:47:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:56.074 06:47:00 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:56.074 06:47:00 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:56.074 06:47:00 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:56.074 06:47:00 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:56.074 06:47:00 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:20:56.074 06:47:00 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:56.074 06:47:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:56.074 06:47:00 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:56.074 06:47:00 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:56.074 06:47:00 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:56.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:56.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:20:56.074 00:20:56.074 --- 10.0.0.2 ping statistics --- 00:20:56.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.074 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:20:56.074 06:47:00 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:56.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:56.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:20:56.074 00:20:56.074 --- 10.0.0.1 ping statistics --- 00:20:56.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.074 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:20:56.074 06:47:00 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:56.074 06:47:00 -- nvmf/common.sh@411 -- # return 0 00:20:56.074 06:47:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:56.074 06:47:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:56.074 06:47:00 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:56.074 06:47:00 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:56.074 06:47:00 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:56.074 06:47:00 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:56.074 06:47:00 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:56.074 06:47:00 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:56.074 06:47:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:56.074 06:47:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:56.074 06:47:00 -- common/autotest_common.sh@10 -- # set +x 00:20:56.074 06:47:00 -- nvmf/common.sh@470 -- # nvmfpid=20257 00:20:56.074 06:47:00 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:56.074 06:47:00 -- nvmf/common.sh@471 -- # waitforlisten 20257 00:20:56.074 06:47:00 -- common/autotest_common.sh@817 -- # '[' -z 20257 ']' 00:20:56.074 06:47:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.074 06:47:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:56.074 06:47:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.074 06:47:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:56.074 06:47:00 -- common/autotest_common.sh@10 -- # set +x 00:20:56.332 [2024-04-17 06:47:00.750908] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
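The loopback topology nvmf_tcp_init builds above is small enough to reproduce by hand: one port of the NIC is moved into a namespace and addressed as the target, the other stays in the root namespace as the initiator. A sketch using the interface names discovered in this run (cvl_0_0 / cvl_0_1):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator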
00:20:56.332 [2024-04-17 06:47:00.750991] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.332 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.332 [2024-04-17 06:47:00.815127] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.332 [2024-04-17 06:47:00.901467] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.332 [2024-04-17 06:47:00.901524] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:56.332 [2024-04-17 06:47:00.901538] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.332 [2024-04-17 06:47:00.901550] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:56.332 [2024-04-17 06:47:00.901574] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:56.332 [2024-04-17 06:47:00.901600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.590 06:47:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:56.590 06:47:01 -- common/autotest_common.sh@850 -- # return 0 00:20:56.590 06:47:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:56.590 06:47:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:56.590 06:47:01 -- common/autotest_common.sh@10 -- # set +x 00:20:56.590 06:47:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.590 06:47:01 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:56.590 06:47:01 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:56.590 06:47:01 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:56.590 06:47:01 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:56.590 06:47:01 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:56.590 06:47:01 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:56.590 06:47:01 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:56.590 06:47:01 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:56.848 [2024-04-17 06:47:01.297093] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.848 [2024-04-17 06:47:01.313111] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:56.848 [2024-04-17 06:47:01.313346] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.848 [2024-04-17 06:47:01.345612] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:56.848 malloc0 00:20:56.848 06:47:01 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:56.848 06:47:01 -- fips/fips.sh@147 -- # bdevperf_pid=20278 00:20:56.848 06:47:01 -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:56.848 06:47:01 -- 
fips/fips.sh@148 -- # waitforlisten 20278 /var/tmp/bdevperf.sock 00:20:56.848 06:47:01 -- common/autotest_common.sh@817 -- # '[' -z 20278 ']' 00:20:56.848 06:47:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:56.848 06:47:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:56.848 06:47:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:56.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:56.848 06:47:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:56.848 06:47:01 -- common/autotest_common.sh@10 -- # set +x 00:20:56.848 [2024-04-17 06:47:01.435087] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:20:56.848 [2024-04-17 06:47:01.435187] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid20278 ] 00:20:57.106 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.106 [2024-04-17 06:47:01.498027] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.106 [2024-04-17 06:47:01.582200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:57.106 06:47:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:57.106 06:47:01 -- common/autotest_common.sh@850 -- # return 0 00:20:57.106 06:47:01 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:57.363 [2024-04-17 06:47:01.900145] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:57.363 [2024-04-17 06:47:01.900301] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:57.620 TLSTESTn1 00:20:57.620 06:47:01 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:57.620 Running I/O for 10 seconds... 
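The TLSTESTn1 bdev that the 10 second run below verifies is created over bdevperf's RPC socket with a single attach call; a condensed sketch of that sequence, with the NQNs, address and key path taken from the commands logged above:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Attach an NVMe-oF TCP controller through bdevperf's RPC socket,
    # authenticating the connection with the pre-shared TLS key in key.txt.
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk $SPDK/test/nvmf/fips/key.txt
    # Start the queued verify job (queue depth 128, 4 KiB I/O, 10 seconds).
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests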
00:21:09.873 00:21:09.873 Latency(us) 00:21:09.873 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.873 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:09.873 Verification LBA range: start 0x0 length 0x2000 00:21:09.873 TLSTESTn1 : 10.06 1783.97 6.97 0.00 0.00 71550.69 6043.88 95536.92 00:21:09.873 =================================================================================================================== 00:21:09.873 Total : 1783.97 6.97 0.00 0.00 71550.69 6043.88 95536.92 00:21:09.873 0 00:21:09.873 06:47:12 -- fips/fips.sh@1 -- # cleanup 00:21:09.873 06:47:12 -- fips/fips.sh@15 -- # process_shm --id 0 00:21:09.873 06:47:12 -- common/autotest_common.sh@794 -- # type=--id 00:21:09.873 06:47:12 -- common/autotest_common.sh@795 -- # id=0 00:21:09.873 06:47:12 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:21:09.873 06:47:12 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:09.873 06:47:12 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:21:09.873 06:47:12 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:21:09.873 06:47:12 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:21:09.873 06:47:12 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:09.873 nvmf_trace.0 00:21:09.873 06:47:12 -- common/autotest_common.sh@809 -- # return 0 00:21:09.873 06:47:12 -- fips/fips.sh@16 -- # killprocess 20278 00:21:09.873 06:47:12 -- common/autotest_common.sh@936 -- # '[' -z 20278 ']' 00:21:09.873 06:47:12 -- common/autotest_common.sh@940 -- # kill -0 20278 00:21:09.873 06:47:12 -- common/autotest_common.sh@941 -- # uname 00:21:09.873 06:47:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:09.873 06:47:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 20278 00:21:09.873 06:47:12 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:09.873 06:47:12 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:09.873 06:47:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 20278' 00:21:09.873 killing process with pid 20278 00:21:09.873 06:47:12 -- common/autotest_common.sh@955 -- # kill 20278 00:21:09.873 Received shutdown signal, test time was about 10.000000 seconds 00:21:09.873 00:21:09.873 Latency(us) 00:21:09.873 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.873 =================================================================================================================== 00:21:09.873 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:09.873 [2024-04-17 06:47:12.293170] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:09.873 06:47:12 -- common/autotest_common.sh@960 -- # wait 20278 00:21:09.873 06:47:12 -- fips/fips.sh@17 -- # nvmftestfini 00:21:09.873 06:47:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:09.873 06:47:12 -- nvmf/common.sh@117 -- # sync 00:21:09.873 06:47:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:09.873 06:47:12 -- nvmf/common.sh@120 -- # set +e 00:21:09.873 06:47:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:09.873 06:47:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:09.873 rmmod nvme_tcp 00:21:09.873 rmmod nvme_fabrics 00:21:09.873 rmmod nvme_keyring 00:21:09.873 06:47:12 -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:09.873 06:47:12 -- nvmf/common.sh@124 -- # set -e 00:21:09.873 06:47:12 -- nvmf/common.sh@125 -- # return 0 00:21:09.873 06:47:12 -- nvmf/common.sh@478 -- # '[' -n 20257 ']' 00:21:09.873 06:47:12 -- nvmf/common.sh@479 -- # killprocess 20257 00:21:09.873 06:47:12 -- common/autotest_common.sh@936 -- # '[' -z 20257 ']' 00:21:09.873 06:47:12 -- common/autotest_common.sh@940 -- # kill -0 20257 00:21:09.873 06:47:12 -- common/autotest_common.sh@941 -- # uname 00:21:09.873 06:47:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:09.873 06:47:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 20257 00:21:09.873 06:47:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:09.873 06:47:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:09.873 06:47:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 20257' 00:21:09.873 killing process with pid 20257 00:21:09.873 06:47:12 -- common/autotest_common.sh@955 -- # kill 20257 00:21:09.873 [2024-04-17 06:47:12.609678] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:09.873 06:47:12 -- common/autotest_common.sh@960 -- # wait 20257 00:21:09.873 06:47:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:09.873 06:47:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:09.873 06:47:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:09.873 06:47:12 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:09.873 06:47:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:09.873 06:47:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.873 06:47:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:09.873 06:47:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.439 06:47:14 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:10.439 06:47:14 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:10.439 00:21:10.439 real 0m16.568s 00:21:10.439 user 0m19.928s 00:21:10.439 sys 0m6.734s 00:21:10.439 06:47:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:10.439 06:47:14 -- common/autotest_common.sh@10 -- # set +x 00:21:10.439 ************************************ 00:21:10.439 END TEST nvmf_fips 00:21:10.439 ************************************ 00:21:10.439 06:47:14 -- nvmf/nvmf.sh@64 -- # '[' 1 -eq 1 ']' 00:21:10.439 06:47:14 -- nvmf/nvmf.sh@65 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:10.439 06:47:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:10.439 06:47:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:10.439 06:47:14 -- common/autotest_common.sh@10 -- # set +x 00:21:10.439 ************************************ 00:21:10.439 START TEST nvmf_fuzz 00:21:10.439 ************************************ 00:21:10.439 06:47:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:10.697 * Looking for test storage... 
00:21:10.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:10.697 06:47:15 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:10.697 06:47:15 -- nvmf/common.sh@7 -- # uname -s 00:21:10.697 06:47:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.697 06:47:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.697 06:47:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.697 06:47:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.697 06:47:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.697 06:47:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.697 06:47:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.697 06:47:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.697 06:47:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.697 06:47:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.697 06:47:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.697 06:47:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.697 06:47:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.697 06:47:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.697 06:47:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:10.697 06:47:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.697 06:47:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:10.697 06:47:15 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.697 06:47:15 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.697 06:47:15 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.697 06:47:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.697 06:47:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.697 06:47:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.697 06:47:15 -- paths/export.sh@5 -- # export PATH 00:21:10.697 06:47:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.697 06:47:15 -- nvmf/common.sh@47 -- # : 0 00:21:10.697 06:47:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:10.697 06:47:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:10.697 06:47:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.697 06:47:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.697 06:47:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.697 06:47:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:10.697 06:47:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:10.697 06:47:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:10.697 06:47:15 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:21:10.697 06:47:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:10.697 06:47:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.697 06:47:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:10.697 06:47:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:10.697 06:47:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:10.697 06:47:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.697 06:47:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:10.697 06:47:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.697 06:47:15 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:10.697 06:47:15 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:10.697 06:47:15 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:10.697 06:47:15 -- common/autotest_common.sh@10 -- # set +x 00:21:12.598 06:47:17 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:12.598 06:47:17 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:12.598 06:47:17 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:12.598 06:47:17 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:12.598 06:47:17 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:12.598 06:47:17 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:12.598 06:47:17 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:12.598 06:47:17 -- nvmf/common.sh@295 -- # net_devs=() 00:21:12.598 06:47:17 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:12.598 06:47:17 -- nvmf/common.sh@296 -- # e810=() 00:21:12.598 06:47:17 -- nvmf/common.sh@296 -- # local -ga e810 00:21:12.598 06:47:17 -- nvmf/common.sh@297 -- # x722=() 
00:21:12.598 06:47:17 -- nvmf/common.sh@297 -- # local -ga x722 00:21:12.598 06:47:17 -- nvmf/common.sh@298 -- # mlx=() 00:21:12.598 06:47:17 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:12.598 06:47:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:12.598 06:47:17 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:12.598 06:47:17 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:12.598 06:47:17 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:12.598 06:47:17 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:12.598 06:47:17 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:12.598 06:47:17 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:12.598 06:47:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:12.598 06:47:17 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:12.598 06:47:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:12.598 06:47:17 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:12.598 06:47:17 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:12.598 06:47:17 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:12.598 06:47:17 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:12.598 06:47:17 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:12.598 06:47:17 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:12.598 06:47:17 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:12.598 06:47:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:12.599 06:47:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:12.599 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:12.599 06:47:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:12.599 06:47:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:12.599 06:47:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.599 06:47:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.599 06:47:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:12.599 06:47:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:12.599 06:47:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:12.599 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:12.599 06:47:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:12.599 06:47:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:12.599 06:47:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.599 06:47:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.599 06:47:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:12.599 06:47:17 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:12.599 06:47:17 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:12.599 06:47:17 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:12.599 06:47:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:12.599 06:47:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.599 06:47:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:12.599 06:47:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.599 06:47:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:12.599 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:12.599 06:47:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:21:12.599 06:47:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:12.599 06:47:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.599 06:47:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:12.599 06:47:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.599 06:47:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:12.599 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:12.599 06:47:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.599 06:47:17 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:12.599 06:47:17 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:12.599 06:47:17 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:12.599 06:47:17 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:12.599 06:47:17 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:12.599 06:47:17 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:12.599 06:47:17 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:12.599 06:47:17 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:12.599 06:47:17 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:12.599 06:47:17 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:12.599 06:47:17 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:12.599 06:47:17 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:12.599 06:47:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:12.599 06:47:17 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:12.599 06:47:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:12.599 06:47:17 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:12.599 06:47:17 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:12.599 06:47:17 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:12.599 06:47:17 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:12.599 06:47:17 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:12.599 06:47:17 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:12.599 06:47:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:12.599 06:47:17 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:12.599 06:47:17 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:12.599 06:47:17 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:12.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:12.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:21:12.599 00:21:12.599 --- 10.0.0.2 ping statistics --- 00:21:12.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.599 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:21:12.599 06:47:17 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:12.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:12.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:21:12.599 00:21:12.599 --- 10.0.0.1 ping statistics --- 00:21:12.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.599 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:21:12.599 06:47:17 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:12.599 06:47:17 -- nvmf/common.sh@411 -- # return 0 00:21:12.599 06:47:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:12.599 06:47:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:12.599 06:47:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:12.599 06:47:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:12.599 06:47:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:12.599 06:47:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:12.599 06:47:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:12.599 06:47:17 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=23539 00:21:12.599 06:47:17 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:12.599 06:47:17 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:12.599 06:47:17 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 23539 00:21:12.599 06:47:17 -- common/autotest_common.sh@817 -- # '[' -z 23539 ']' 00:21:12.599 06:47:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.599 06:47:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:12.599 06:47:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:12.599 06:47:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:12.599 06:47:17 -- common/autotest_common.sh@10 -- # set +x 00:21:13.165 06:47:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:13.165 06:47:17 -- common/autotest_common.sh@850 -- # return 0 00:21:13.165 06:47:17 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:13.165 06:47:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:13.165 06:47:17 -- common/autotest_common.sh@10 -- # set +x 00:21:13.165 06:47:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:13.165 06:47:17 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:21:13.165 06:47:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:13.165 06:47:17 -- common/autotest_common.sh@10 -- # set +x 00:21:13.165 Malloc0 00:21:13.165 06:47:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:13.165 06:47:17 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:13.165 06:47:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:13.165 06:47:17 -- common/autotest_common.sh@10 -- # set +x 00:21:13.165 06:47:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:13.165 06:47:17 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:13.165 06:47:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:13.165 06:47:17 -- common/autotest_common.sh@10 -- # set +x 00:21:13.165 06:47:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:13.165 06:47:17 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:13.165 06:47:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:13.165 06:47:17 -- common/autotest_common.sh@10 -- # set +x 00:21:13.165 06:47:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:13.165 06:47:17 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:21:13.165 06:47:17 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:21:45.228 Fuzzing completed. Shutting down the fuzz application 00:21:45.228 00:21:45.228 Dumping successful admin opcodes: 00:21:45.228 8, 9, 10, 24, 00:21:45.228 Dumping successful io opcodes: 00:21:45.228 0, 9, 00:21:45.228 NS: 0x200003aeff00 I/O qp, Total commands completed: 465200, total successful commands: 2691, random_seed: 3991752256 00:21:45.228 NS: 0x200003aeff00 admin qp, Total commands completed: 56352, total successful commands: 447, random_seed: 1354939584 00:21:45.228 06:47:48 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:21:45.228 Fuzzing completed. 
Shutting down the fuzz application 00:21:45.228 00:21:45.228 Dumping successful admin opcodes: 00:21:45.228 24, 00:21:45.228 Dumping successful io opcodes: 00:21:45.228 00:21:45.228 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1143718462 00:21:45.228 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1143872010 00:21:45.228 06:47:49 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:45.228 06:47:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.228 06:47:49 -- common/autotest_common.sh@10 -- # set +x 00:21:45.228 06:47:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.228 06:47:49 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:21:45.228 06:47:49 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:21:45.228 06:47:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:45.228 06:47:49 -- nvmf/common.sh@117 -- # sync 00:21:45.228 06:47:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:45.228 06:47:49 -- nvmf/common.sh@120 -- # set +e 00:21:45.228 06:47:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:45.228 06:47:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:45.228 rmmod nvme_tcp 00:21:45.228 rmmod nvme_fabrics 00:21:45.228 rmmod nvme_keyring 00:21:45.228 06:47:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:45.228 06:47:49 -- nvmf/common.sh@124 -- # set -e 00:21:45.228 06:47:49 -- nvmf/common.sh@125 -- # return 0 00:21:45.228 06:47:49 -- nvmf/common.sh@478 -- # '[' -n 23539 ']' 00:21:45.228 06:47:49 -- nvmf/common.sh@479 -- # killprocess 23539 00:21:45.228 06:47:49 -- common/autotest_common.sh@936 -- # '[' -z 23539 ']' 00:21:45.228 06:47:49 -- common/autotest_common.sh@940 -- # kill -0 23539 00:21:45.228 06:47:49 -- common/autotest_common.sh@941 -- # uname 00:21:45.228 06:47:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:45.228 06:47:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 23539 00:21:45.228 06:47:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:45.228 06:47:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:45.228 06:47:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 23539' 00:21:45.228 killing process with pid 23539 00:21:45.228 06:47:49 -- common/autotest_common.sh@955 -- # kill 23539 00:21:45.228 06:47:49 -- common/autotest_common.sh@960 -- # wait 23539 00:21:45.228 06:47:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:45.228 06:47:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:45.228 06:47:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:45.228 06:47:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:45.228 06:47:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:45.228 06:47:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.228 06:47:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:45.228 06:47:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.133 06:47:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:47.133 06:47:51 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:21:47.133 00:21:47.133 real 0m36.676s 00:21:47.133 user 0m50.497s 00:21:47.133 sys 0m15.124s 00:21:47.133 
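Target-side, the fuzz passes above need only a handful of RPCs before nvme_fuzz is pointed at the listener; a condensed sketch with the same transport options, NQN and malloc backing device as the logged run (the test's rpc_cmd helper wraps these same rpc.py calls):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC=$SPDK/scripts/rpc.py
    # Same sequence rpc_cmd issues above: transport, backing bdev,
    # subsystem, namespace, listener.
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create -b Malloc0 64 512
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 30 second randomized pass with a fixed seed, as in the first run above.
    $SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a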
06:47:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:47.133 06:47:51 -- common/autotest_common.sh@10 -- # set +x 00:21:47.133 ************************************ 00:21:47.133 END TEST nvmf_fuzz 00:21:47.133 ************************************ 00:21:47.133 06:47:51 -- nvmf/nvmf.sh@66 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:47.133 06:47:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:47.133 06:47:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:47.133 06:47:51 -- common/autotest_common.sh@10 -- # set +x 00:21:47.391 ************************************ 00:21:47.391 START TEST nvmf_multiconnection 00:21:47.391 ************************************ 00:21:47.391 06:47:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:47.391 * Looking for test storage... 00:21:47.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:47.391 06:47:51 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:47.391 06:47:51 -- nvmf/common.sh@7 -- # uname -s 00:21:47.391 06:47:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.391 06:47:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.391 06:47:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:47.391 06:47:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.391 06:47:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.391 06:47:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.391 06:47:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.391 06:47:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.391 06:47:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.391 06:47:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:47.391 06:47:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:47.391 06:47:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:47.391 06:47:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:47.391 06:47:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:47.391 06:47:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:47.391 06:47:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:47.391 06:47:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:47.391 06:47:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:47.391 06:47:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.391 06:47:51 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.391 06:47:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.391 06:47:51 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.391 06:47:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.391 06:47:51 -- paths/export.sh@5 -- # export PATH 00:21:47.391 06:47:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.391 06:47:51 -- nvmf/common.sh@47 -- # : 0 00:21:47.391 06:47:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:47.391 06:47:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:47.391 06:47:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:47.391 06:47:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.391 06:47:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.391 06:47:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:47.391 06:47:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:47.391 06:47:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:47.392 06:47:51 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:47.392 06:47:51 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:47.392 06:47:51 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:21:47.392 06:47:51 -- target/multiconnection.sh@16 -- # nvmftestinit 00:21:47.392 06:47:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:47.392 06:47:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:47.392 06:47:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:47.392 06:47:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:47.392 06:47:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:47.392 06:47:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.392 06:47:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:47.392 06:47:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.392 06:47:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:47.392 06:47:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:47.392 06:47:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:47.392 06:47:51 -- 
common/autotest_common.sh@10 -- # set +x 00:21:49.337 06:47:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:49.337 06:47:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:49.337 06:47:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:49.337 06:47:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:49.337 06:47:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:49.337 06:47:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:49.337 06:47:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:49.337 06:47:53 -- nvmf/common.sh@295 -- # net_devs=() 00:21:49.337 06:47:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:49.337 06:47:53 -- nvmf/common.sh@296 -- # e810=() 00:21:49.337 06:47:53 -- nvmf/common.sh@296 -- # local -ga e810 00:21:49.337 06:47:53 -- nvmf/common.sh@297 -- # x722=() 00:21:49.337 06:47:53 -- nvmf/common.sh@297 -- # local -ga x722 00:21:49.337 06:47:53 -- nvmf/common.sh@298 -- # mlx=() 00:21:49.337 06:47:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:49.337 06:47:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:49.337 06:47:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:49.337 06:47:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:49.337 06:47:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:49.337 06:47:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:49.337 06:47:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:49.337 06:47:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:49.337 06:47:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:49.337 06:47:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:49.337 06:47:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:49.337 06:47:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:49.337 06:47:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:49.337 06:47:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:49.337 06:47:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:49.337 06:47:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:49.337 06:47:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:49.337 06:47:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:49.337 06:47:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:49.337 06:47:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:49.337 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:49.337 06:47:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:49.337 06:47:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:49.337 06:47:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.337 06:47:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.337 06:47:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:49.337 06:47:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:49.337 06:47:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:49.337 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:49.337 06:47:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:49.337 06:47:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:49.337 06:47:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.337 06:47:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:21:49.337 06:47:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:49.337 06:47:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:49.337 06:47:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:49.337 06:47:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:49.337 06:47:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:49.337 06:47:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.337 06:47:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:49.337 06:47:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.337 06:47:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:49.337 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:49.337 06:47:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.337 06:47:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:49.337 06:47:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.337 06:47:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:49.337 06:47:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.337 06:47:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:49.337 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:49.337 06:47:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.337 06:47:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:49.337 06:47:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:49.337 06:47:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:49.337 06:47:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:49.337 06:47:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:49.337 06:47:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:49.337 06:47:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:49.337 06:47:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:49.337 06:47:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:49.337 06:47:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:49.337 06:47:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:49.338 06:47:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:49.338 06:47:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:49.338 06:47:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:49.338 06:47:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:49.338 06:47:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:49.338 06:47:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:49.338 06:47:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:49.338 06:47:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:49.338 06:47:53 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:49.338 06:47:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:49.595 06:47:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:49.595 06:47:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:49.595 06:47:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:49.595 06:47:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:49.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:49.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:21:49.595 00:21:49.595 --- 10.0.0.2 ping statistics --- 00:21:49.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.595 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:21:49.595 06:47:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:49.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:49.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:21:49.595 00:21:49.595 --- 10.0.0.1 ping statistics --- 00:21:49.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.595 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:21:49.595 06:47:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:49.595 06:47:54 -- nvmf/common.sh@411 -- # return 0 00:21:49.595 06:47:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:49.595 06:47:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:49.595 06:47:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:49.595 06:47:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:49.595 06:47:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:49.595 06:47:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:49.595 06:47:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:49.595 06:47:54 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:21:49.595 06:47:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:49.595 06:47:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:49.595 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:49.595 06:47:54 -- nvmf/common.sh@470 -- # nvmfpid=29264 00:21:49.595 06:47:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:49.595 06:47:54 -- nvmf/common.sh@471 -- # waitforlisten 29264 00:21:49.595 06:47:54 -- common/autotest_common.sh@817 -- # '[' -z 29264 ']' 00:21:49.595 06:47:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.595 06:47:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:49.595 06:47:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.595 06:47:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:49.595 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:49.595 [2024-04-17 06:47:54.068312] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:21:49.596 [2024-04-17 06:47:54.068403] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.596 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.596 [2024-04-17 06:47:54.134752] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:49.854 [2024-04-17 06:47:54.225501] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.854 [2024-04-17 06:47:54.225550] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:49.854 [2024-04-17 06:47:54.225578] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.854 [2024-04-17 06:47:54.225589] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:49.854 [2024-04-17 06:47:54.225599] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:49.854 [2024-04-17 06:47:54.225672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.854 [2024-04-17 06:47:54.225763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:49.854 [2024-04-17 06:47:54.225790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:49.854 [2024-04-17 06:47:54.225793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.854 06:47:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:49.854 06:47:54 -- common/autotest_common.sh@850 -- # return 0 00:21:49.854 06:47:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:49.854 06:47:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:49.854 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:49.854 06:47:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.854 06:47:54 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:49.854 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:49.854 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:49.854 [2024-04-17 06:47:54.383998] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.854 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:49.854 06:47:54 -- target/multiconnection.sh@21 -- # seq 1 11 00:21:49.854 06:47:54 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:49.854 06:47:54 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:49.854 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:49.854 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:49.854 Malloc1 00:21:49.854 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:49.854 06:47:54 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:21:49.854 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:49.854 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:49.854 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:49.854 06:47:54 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:49.854 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:49.854 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:49.854 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:49.854 06:47:54 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:49.854 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:49.854 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:49.854 [2024-04-17 06:47:54.441397] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.854 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:49.854 06:47:54 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:49.854 06:47:54 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:21:49.854 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:49.854 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.113 Malloc2 00:21:50.113 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.113 06:47:54 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:50.113 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.113 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.113 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.113 06:47:54 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:21:50.113 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.113 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.113 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.113 06:47:54 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:50.113 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.113 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.113 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.113 06:47:54 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:50.113 06:47:54 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:21:50.113 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.113 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.113 Malloc3 00:21:50.113 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.113 06:47:54 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:21:50.113 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.113 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.113 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.113 06:47:54 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:21:50.113 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.113 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.113 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.113 06:47:54 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:50.113 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.113 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.113 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.114 06:47:54 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:50.114 06:47:54 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:21:50.114 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.114 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.114 Malloc4 00:21:50.114 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.114 06:47:54 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:21:50.114 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.114 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.114 06:47:54 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.114 06:47:54 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:21:50.114 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.114 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.114 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.114 06:47:54 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:21:50.114 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.114 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.114 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.114 06:47:54 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:50.114 06:47:54 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:21:50.114 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.114 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.114 Malloc5 00:21:50.114 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.114 06:47:54 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:21:50.114 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.114 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.114 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.114 06:47:54 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:21:50.114 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.114 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.114 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.114 06:47:54 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:21:50.114 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.114 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.114 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.114 06:47:54 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:50.114 06:47:54 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:21:50.114 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.114 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.114 Malloc6 00:21:50.114 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.114 06:47:54 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:21:50.114 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.114 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.114 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.114 06:47:54 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:21:50.114 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.114 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.114 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.114 06:47:54 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:21:50.114 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 
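The block above repeats one pattern per subsystem: create a malloc bdev, create the subsystem with its serial number, attach the bdev as a namespace, then add a TCP listener on 10.0.0.2:4420. The test drives this through the rpc_cmd wrapper for each of the 11 subsystems (seq 1 11); an equivalent standalone sketch using scripts/rpc.py against the default /var/tmp/spdk.sock would look roughly like this (illustrative only, path assumed from the workspace layout):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192   # same transport options the test passes
for i in $(seq 1 11); do
  $rpc bdev_malloc_create 64 512 -b Malloc$i                          # 64 MB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i # -a: allow any host, -s: serial number
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done

Each listener add is acknowledged by the nvmf_tcp_listen NOTICE ("NVMe/TCP Target Listening on 10.0.0.2 port 4420") interleaved in the trace.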
00:21:50.114 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.114 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.114 06:47:54 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:50.114 06:47:54 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:21:50.114 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.114 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.114 Malloc7 00:21:50.114 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.114 06:47:54 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:21:50.114 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.114 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.114 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.114 06:47:54 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:21:50.114 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.114 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.372 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.372 06:47:54 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:21:50.372 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.372 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.372 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.372 06:47:54 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:50.372 06:47:54 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:21:50.372 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.372 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.372 Malloc8 00:21:50.372 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.372 06:47:54 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:21:50.372 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.372 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.372 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.373 06:47:54 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:21:50.373 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.373 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.373 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.373 06:47:54 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:21:50.373 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.373 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.373 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.373 06:47:54 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:50.373 06:47:54 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:21:50.373 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.373 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.373 Malloc9 00:21:50.373 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.373 06:47:54 -- 
target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:21:50.373 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.373 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.373 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.373 06:47:54 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:21:50.373 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.373 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.373 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.373 06:47:54 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:21:50.373 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.373 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.373 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.373 06:47:54 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:50.373 06:47:54 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:21:50.373 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.373 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.373 Malloc10 00:21:50.373 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.373 06:47:54 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:21:50.373 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.373 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.373 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.373 06:47:54 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:21:50.373 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.373 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.373 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.373 06:47:54 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:21:50.373 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.373 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.373 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.373 06:47:54 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:50.373 06:47:54 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:21:50.373 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.373 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.373 Malloc11 00:21:50.373 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.373 06:47:54 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:21:50.373 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.373 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.373 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.373 06:47:54 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:21:50.373 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.373 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.373 
06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.373 06:47:54 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:21:50.373 06:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.373 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.373 06:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.373 06:47:54 -- target/multiconnection.sh@28 -- # seq 1 11 00:21:50.373 06:47:54 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:50.373 06:47:54 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:51.306 06:47:55 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:21:51.306 06:47:55 -- common/autotest_common.sh@1184 -- # local i=0 00:21:51.306 06:47:55 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:21:51.306 06:47:55 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:21:51.306 06:47:55 -- common/autotest_common.sh@1191 -- # sleep 2 00:21:53.244 06:47:57 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:21:53.244 06:47:57 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:21:53.244 06:47:57 -- common/autotest_common.sh@1193 -- # grep -c SPDK1 00:21:53.244 06:47:57 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:21:53.244 06:47:57 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:21:53.244 06:47:57 -- common/autotest_common.sh@1194 -- # return 0 00:21:53.244 06:47:57 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:53.244 06:47:57 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:21:53.808 06:47:58 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:21:53.808 06:47:58 -- common/autotest_common.sh@1184 -- # local i=0 00:21:53.808 06:47:58 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:21:53.808 06:47:58 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:21:53.808 06:47:58 -- common/autotest_common.sh@1191 -- # sleep 2 00:21:55.705 06:48:00 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:21:55.705 06:48:00 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:21:55.705 06:48:00 -- common/autotest_common.sh@1193 -- # grep -c SPDK2 00:21:55.705 06:48:00 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:21:55.705 06:48:00 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:21:55.705 06:48:00 -- common/autotest_common.sh@1194 -- # return 0 00:21:55.705 06:48:00 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:55.705 06:48:00 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:21:56.269 06:48:00 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:21:56.269 06:48:00 -- common/autotest_common.sh@1184 -- # local i=0 00:21:56.269 06:48:00 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:21:56.269 
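Each connect above is immediately followed by waitforserial, which polls lsblk until a block device whose serial matches the subsystem (SPDK1, SPDK2, ...) shows up, sleeping 2 seconds between attempts. A standalone approximation of that connect-and-wait loop (waitforserial itself is an autotest helper with a bounded retry count; this sketch only mirrors what the trace shows):

host=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
for i in $(seq 1 11); do
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode$i \
       --hostnqn=$host --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
  # wait until the namespace advertising serial SPDK$i is visible to the host
  until lsblk -l -o NAME,SERIAL | grep -q "SPDK$i"; do sleep 2; done
done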
06:48:00 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:21:56.269 06:48:00 -- common/autotest_common.sh@1191 -- # sleep 2 00:21:58.793 06:48:02 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:21:58.793 06:48:02 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:21:58.793 06:48:02 -- common/autotest_common.sh@1193 -- # grep -c SPDK3 00:21:58.793 06:48:02 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:21:58.793 06:48:02 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:21:58.793 06:48:02 -- common/autotest_common.sh@1194 -- # return 0 00:21:58.793 06:48:02 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:58.793 06:48:02 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:21:59.051 06:48:03 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:21:59.051 06:48:03 -- common/autotest_common.sh@1184 -- # local i=0 00:21:59.051 06:48:03 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:21:59.051 06:48:03 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:21:59.051 06:48:03 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:01.576 06:48:05 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:01.576 06:48:05 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:01.576 06:48:05 -- common/autotest_common.sh@1193 -- # grep -c SPDK4 00:22:01.576 06:48:05 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:01.576 06:48:05 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:01.576 06:48:05 -- common/autotest_common.sh@1194 -- # return 0 00:22:01.576 06:48:05 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:01.576 06:48:05 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:22:01.833 06:48:06 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:22:01.833 06:48:06 -- common/autotest_common.sh@1184 -- # local i=0 00:22:01.833 06:48:06 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:01.833 06:48:06 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:01.833 06:48:06 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:03.731 06:48:08 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:03.731 06:48:08 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:03.731 06:48:08 -- common/autotest_common.sh@1193 -- # grep -c SPDK5 00:22:03.731 06:48:08 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:03.731 06:48:08 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:03.731 06:48:08 -- common/autotest_common.sh@1194 -- # return 0 00:22:03.731 06:48:08 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:03.731 06:48:08 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:22:04.663 06:48:09 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:22:04.663 06:48:09 -- common/autotest_common.sh@1184 -- # 
local i=0 00:22:04.663 06:48:09 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:04.663 06:48:09 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:04.663 06:48:09 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:06.560 06:48:11 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:06.560 06:48:11 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:06.560 06:48:11 -- common/autotest_common.sh@1193 -- # grep -c SPDK6 00:22:06.560 06:48:11 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:06.560 06:48:11 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:06.560 06:48:11 -- common/autotest_common.sh@1194 -- # return 0 00:22:06.560 06:48:11 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.560 06:48:11 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:22:07.503 06:48:11 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:22:07.503 06:48:11 -- common/autotest_common.sh@1184 -- # local i=0 00:22:07.503 06:48:11 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:07.503 06:48:11 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:07.503 06:48:11 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:09.401 06:48:13 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:09.401 06:48:13 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:09.401 06:48:13 -- common/autotest_common.sh@1193 -- # grep -c SPDK7 00:22:09.401 06:48:13 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:09.401 06:48:13 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:09.401 06:48:13 -- common/autotest_common.sh@1194 -- # return 0 00:22:09.401 06:48:13 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:09.401 06:48:13 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:22:10.341 06:48:14 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:22:10.341 06:48:14 -- common/autotest_common.sh@1184 -- # local i=0 00:22:10.341 06:48:14 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:10.341 06:48:14 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:10.341 06:48:14 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:12.268 06:48:16 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:12.268 06:48:16 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:12.268 06:48:16 -- common/autotest_common.sh@1193 -- # grep -c SPDK8 00:22:12.268 06:48:16 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:12.268 06:48:16 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:12.268 06:48:16 -- common/autotest_common.sh@1194 -- # return 0 00:22:12.268 06:48:16 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:12.268 06:48:16 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:22:12.833 
06:48:17 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:22:12.833 06:48:17 -- common/autotest_common.sh@1184 -- # local i=0 00:22:12.833 06:48:17 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:12.833 06:48:17 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:12.833 06:48:17 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:15.356 06:48:19 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:15.356 06:48:19 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:15.356 06:48:19 -- common/autotest_common.sh@1193 -- # grep -c SPDK9 00:22:15.356 06:48:19 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:15.356 06:48:19 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:15.356 06:48:19 -- common/autotest_common.sh@1194 -- # return 0 00:22:15.356 06:48:19 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:15.356 06:48:19 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:22:15.921 06:48:20 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:22:15.921 06:48:20 -- common/autotest_common.sh@1184 -- # local i=0 00:22:15.921 06:48:20 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:15.921 06:48:20 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:15.921 06:48:20 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:17.819 06:48:22 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:17.819 06:48:22 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:17.819 06:48:22 -- common/autotest_common.sh@1193 -- # grep -c SPDK10 00:22:17.819 06:48:22 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:17.819 06:48:22 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:17.819 06:48:22 -- common/autotest_common.sh@1194 -- # return 0 00:22:17.819 06:48:22 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:17.819 06:48:22 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:22:18.384 06:48:22 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:22:18.384 06:48:22 -- common/autotest_common.sh@1184 -- # local i=0 00:22:18.384 06:48:22 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:18.384 06:48:22 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:18.384 06:48:22 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:20.909 06:48:24 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:20.909 06:48:24 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:20.909 06:48:24 -- common/autotest_common.sh@1193 -- # grep -c SPDK11 00:22:20.909 06:48:24 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:20.909 06:48:24 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:20.909 06:48:24 -- common/autotest_common.sh@1194 -- # return 0 00:22:20.909 06:48:24 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:22:20.909 [global] 00:22:20.909 thread=1 00:22:20.909 
invalidate=1 00:22:20.909 rw=read 00:22:20.909 time_based=1 00:22:20.909 runtime=10 00:22:20.909 ioengine=libaio 00:22:20.909 direct=1 00:22:20.909 bs=262144 00:22:20.909 iodepth=64 00:22:20.909 norandommap=1 00:22:20.909 numjobs=1 00:22:20.909 00:22:20.909 [job0] 00:22:20.909 filename=/dev/nvme0n1 00:22:20.909 [job1] 00:22:20.909 filename=/dev/nvme10n1 00:22:20.909 [job2] 00:22:20.909 filename=/dev/nvme1n1 00:22:20.909 [job3] 00:22:20.909 filename=/dev/nvme2n1 00:22:20.909 [job4] 00:22:20.909 filename=/dev/nvme3n1 00:22:20.909 [job5] 00:22:20.909 filename=/dev/nvme4n1 00:22:20.909 [job6] 00:22:20.909 filename=/dev/nvme5n1 00:22:20.909 [job7] 00:22:20.909 filename=/dev/nvme6n1 00:22:20.909 [job8] 00:22:20.909 filename=/dev/nvme7n1 00:22:20.909 [job9] 00:22:20.909 filename=/dev/nvme8n1 00:22:20.909 [job10] 00:22:20.909 filename=/dev/nvme9n1 00:22:20.909 Could not set queue depth (nvme0n1) 00:22:20.909 Could not set queue depth (nvme10n1) 00:22:20.909 Could not set queue depth (nvme1n1) 00:22:20.909 Could not set queue depth (nvme2n1) 00:22:20.909 Could not set queue depth (nvme3n1) 00:22:20.909 Could not set queue depth (nvme4n1) 00:22:20.909 Could not set queue depth (nvme5n1) 00:22:20.909 Could not set queue depth (nvme6n1) 00:22:20.909 Could not set queue depth (nvme7n1) 00:22:20.909 Could not set queue depth (nvme8n1) 00:22:20.909 Could not set queue depth (nvme9n1) 00:22:20.909 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:20.909 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:20.909 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:20.909 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:20.909 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:20.909 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:20.909 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:20.909 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:20.909 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:20.909 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:20.910 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:20.910 fio-3.35 00:22:20.910 Starting 11 threads 00:22:33.198 00:22:33.198 job0: (groupid=0, jobs=1): err= 0: pid=34001: Wed Apr 17 06:48:35 2024 00:22:33.198 read: IOPS=657, BW=164MiB/s (172MB/s)(1654MiB/10066msec) 00:22:33.198 slat (usec): min=13, max=79202, avg=1488.64, stdev=4201.79 00:22:33.198 clat (msec): min=14, max=216, avg=95.82, stdev=31.25 00:22:33.198 lat (msec): min=14, max=237, avg=97.31, stdev=31.78 00:22:33.198 clat percentiles (msec): 00:22:33.199 | 1.00th=[ 47], 5.00th=[ 61], 10.00th=[ 67], 20.00th=[ 73], 00:22:33.199 | 30.00th=[ 78], 40.00th=[ 82], 50.00th=[ 86], 60.00th=[ 93], 00:22:33.199 | 70.00th=[ 104], 80.00th=[ 121], 90.00th=[ 144], 95.00th=[ 163], 00:22:33.199 | 99.00th=[ 186], 99.50th=[ 190], 99.90th=[ 211], 99.95th=[ 211], 00:22:33.199 | 99.99th=[ 218] 
00:22:33.199 bw ( KiB/s): min=85504, max=239616, per=10.43%, avg=167738.85, stdev=45116.72, samples=20 00:22:33.199 iops : min= 334, max= 936, avg=655.10, stdev=176.31, samples=20 00:22:33.199 lat (msec) : 20=0.27%, 50=1.27%, 100=65.75%, 250=32.71% 00:22:33.199 cpu : usr=0.43%, sys=2.16%, ctx=1299, majf=0, minf=4097 00:22:33.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:22:33.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:33.199 issued rwts: total=6616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.199 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:33.199 job1: (groupid=0, jobs=1): err= 0: pid=34002: Wed Apr 17 06:48:35 2024 00:22:33.199 read: IOPS=1277, BW=319MiB/s (335MB/s)(3206MiB/10039msec) 00:22:33.199 slat (usec): min=14, max=137960, avg=750.02, stdev=2852.51 00:22:33.199 clat (msec): min=2, max=302, avg=49.32, stdev=29.43 00:22:33.199 lat (msec): min=2, max=302, avg=50.07, stdev=29.84 00:22:33.199 clat percentiles (msec): 00:22:33.199 | 1.00th=[ 13], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 32], 00:22:33.199 | 30.00th=[ 33], 40.00th=[ 35], 50.00th=[ 42], 60.00th=[ 47], 00:22:33.199 | 70.00th=[ 53], 80.00th=[ 61], 90.00th=[ 78], 95.00th=[ 100], 00:22:33.199 | 99.00th=[ 184], 99.50th=[ 194], 99.90th=[ 239], 99.95th=[ 241], 00:22:33.199 | 99.99th=[ 275] 00:22:33.199 bw ( KiB/s): min=143872, max=516096, per=20.30%, avg=326662.05, stdev=112602.76, samples=20 00:22:33.199 iops : min= 562, max= 2016, avg=1275.90, stdev=439.86, samples=20 00:22:33.199 lat (msec) : 4=0.11%, 10=0.55%, 20=1.65%, 50=63.72%, 100=29.13% 00:22:33.199 lat (msec) : 250=4.79%, 500=0.05% 00:22:33.199 cpu : usr=0.74%, sys=4.02%, ctx=2508, majf=0, minf=4097 00:22:33.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:22:33.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:33.199 issued rwts: total=12824,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.199 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:33.199 job2: (groupid=0, jobs=1): err= 0: pid=34003: Wed Apr 17 06:48:35 2024 00:22:33.199 read: IOPS=616, BW=154MiB/s (162MB/s)(1552MiB/10068msec) 00:22:33.199 slat (usec): min=10, max=65837, avg=1363.12, stdev=4008.83 00:22:33.199 clat (msec): min=8, max=261, avg=102.36, stdev=40.56 00:22:33.199 lat (msec): min=8, max=265, avg=103.73, stdev=40.83 00:22:33.199 clat percentiles (msec): 00:22:33.199 | 1.00th=[ 22], 5.00th=[ 59], 10.00th=[ 65], 20.00th=[ 72], 00:22:33.199 | 30.00th=[ 78], 40.00th=[ 83], 50.00th=[ 90], 60.00th=[ 99], 00:22:33.199 | 70.00th=[ 114], 80.00th=[ 134], 90.00th=[ 167], 95.00th=[ 188], 00:22:33.199 | 99.00th=[ 226], 99.50th=[ 241], 99.90th=[ 247], 99.95th=[ 253], 00:22:33.199 | 99.99th=[ 262] 00:22:33.199 bw ( KiB/s): min=90112, max=256000, per=9.77%, avg=157252.75, stdev=49218.19, samples=20 00:22:33.199 iops : min= 352, max= 1000, avg=614.20, stdev=192.24, samples=20 00:22:33.199 lat (msec) : 10=0.06%, 20=0.85%, 50=1.00%, 100=59.88%, 250=38.10% 00:22:33.199 lat (msec) : 500=0.10% 00:22:33.199 cpu : usr=0.35%, sys=1.99%, ctx=1370, majf=0, minf=3721 00:22:33.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:22:33.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.1%, >=64=0.0% 00:22:33.199 issued rwts: total=6207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.199 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:33.199 job3: (groupid=0, jobs=1): err= 0: pid=34004: Wed Apr 17 06:48:35 2024 00:22:33.199 read: IOPS=473, BW=118MiB/s (124MB/s)(1202MiB/10151msec) 00:22:33.199 slat (usec): min=11, max=86532, avg=1780.24, stdev=6538.88 00:22:33.199 clat (msec): min=8, max=349, avg=133.22, stdev=75.89 00:22:33.199 lat (msec): min=8, max=365, avg=135.00, stdev=77.05 00:22:33.199 clat percentiles (msec): 00:22:33.199 | 1.00th=[ 39], 5.00th=[ 57], 10.00th=[ 64], 20.00th=[ 71], 00:22:33.199 | 30.00th=[ 78], 40.00th=[ 86], 50.00th=[ 101], 60.00th=[ 122], 00:22:33.199 | 70.00th=[ 167], 80.00th=[ 213], 90.00th=[ 259], 95.00th=[ 279], 00:22:33.199 | 99.00th=[ 300], 99.50th=[ 313], 99.90th=[ 351], 99.95th=[ 351], 00:22:33.199 | 99.99th=[ 351] 00:22:33.199 bw ( KiB/s): min=56832, max=248832, per=7.55%, avg=121445.00, stdev=58939.31, samples=20 00:22:33.199 iops : min= 222, max= 972, avg=474.35, stdev=230.21, samples=20 00:22:33.199 lat (msec) : 10=0.02%, 20=0.40%, 50=2.56%, 100=47.03%, 250=36.92% 00:22:33.199 lat (msec) : 500=13.08% 00:22:33.199 cpu : usr=0.29%, sys=1.53%, ctx=988, majf=0, minf=4097 00:22:33.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:33.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:33.199 issued rwts: total=4808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.199 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:33.199 job4: (groupid=0, jobs=1): err= 0: pid=34005: Wed Apr 17 06:48:35 2024 00:22:33.199 read: IOPS=323, BW=80.9MiB/s (84.9MB/s)(823MiB/10165msec) 00:22:33.199 slat (usec): min=10, max=143684, avg=2718.91, stdev=10024.88 00:22:33.199 clat (msec): min=9, max=536, avg=194.84, stdev=76.80 00:22:33.199 lat (msec): min=9, max=536, avg=197.56, stdev=78.10 00:22:33.199 clat percentiles (msec): 00:22:33.199 | 1.00th=[ 49], 5.00th=[ 73], 10.00th=[ 97], 20.00th=[ 127], 00:22:33.199 | 30.00th=[ 146], 40.00th=[ 169], 50.00th=[ 197], 60.00th=[ 222], 00:22:33.199 | 70.00th=[ 241], 80.00th=[ 262], 90.00th=[ 288], 95.00th=[ 305], 00:22:33.199 | 99.00th=[ 418], 99.50th=[ 447], 99.90th=[ 506], 99.95th=[ 535], 00:22:33.199 | 99.99th=[ 535] 00:22:33.199 bw ( KiB/s): min=49664, max=134144, per=5.13%, avg=82594.20, stdev=24583.91, samples=20 00:22:33.199 iops : min= 194, max= 524, avg=322.60, stdev=96.03, samples=20 00:22:33.199 lat (msec) : 10=0.21%, 20=0.03%, 50=1.06%, 100=9.66%, 250=63.51% 00:22:33.199 lat (msec) : 500=25.28%, 750=0.24% 00:22:33.199 cpu : usr=0.19%, sys=1.03%, ctx=693, majf=0, minf=4097 00:22:33.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:22:33.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:33.199 issued rwts: total=3291,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.199 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:33.199 job5: (groupid=0, jobs=1): err= 0: pid=34006: Wed Apr 17 06:48:35 2024 00:22:33.199 read: IOPS=399, BW=100.0MiB/s (105MB/s)(1016MiB/10165msec) 00:22:33.199 slat (usec): min=10, max=116441, avg=1882.03, stdev=7630.63 00:22:33.199 clat (msec): min=2, max=362, avg=158.04, stdev=81.89 00:22:33.199 lat (msec): min=2, max=362, avg=159.92, stdev=83.54 
00:22:33.199 clat percentiles (msec): 00:22:33.199 | 1.00th=[ 8], 5.00th=[ 28], 10.00th=[ 61], 20.00th=[ 86], 00:22:33.199 | 30.00th=[ 103], 40.00th=[ 120], 50.00th=[ 148], 60.00th=[ 182], 00:22:33.199 | 70.00th=[ 203], 80.00th=[ 247], 90.00th=[ 275], 95.00th=[ 296], 00:22:33.199 | 99.00th=[ 313], 99.50th=[ 326], 99.90th=[ 351], 99.95th=[ 363], 00:22:33.199 | 99.99th=[ 363] 00:22:33.199 bw ( KiB/s): min=52224, max=172544, per=6.36%, avg=102397.80, stdev=42821.86, samples=20 00:22:33.199 iops : min= 204, max= 674, avg=399.90, stdev=167.19, samples=20 00:22:33.199 lat (msec) : 4=0.05%, 10=1.45%, 20=1.87%, 50=5.22%, 100=19.90% 00:22:33.199 lat (msec) : 250=52.47%, 500=19.04% 00:22:33.199 cpu : usr=0.17%, sys=1.29%, ctx=1021, majf=0, minf=4097 00:22:33.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:22:33.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:33.199 issued rwts: total=4065,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.199 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:33.199 job6: (groupid=0, jobs=1): err= 0: pid=34008: Wed Apr 17 06:48:35 2024 00:22:33.199 read: IOPS=461, BW=115MiB/s (121MB/s)(1173MiB/10160msec) 00:22:33.199 slat (usec): min=8, max=370831, avg=1539.05, stdev=9078.34 00:22:33.199 clat (usec): min=1824, max=647540, avg=136975.22, stdev=100752.34 00:22:33.199 lat (usec): min=1851, max=656441, avg=138514.27, stdev=102420.14 00:22:33.199 clat percentiles (msec): 00:22:33.199 | 1.00th=[ 8], 5.00th=[ 16], 10.00th=[ 24], 20.00th=[ 53], 00:22:33.199 | 30.00th=[ 68], 40.00th=[ 79], 50.00th=[ 106], 60.00th=[ 140], 00:22:33.199 | 70.00th=[ 199], 80.00th=[ 243], 90.00th=[ 279], 95.00th=[ 300], 00:22:33.199 | 99.00th=[ 430], 99.50th=[ 518], 99.90th=[ 600], 99.95th=[ 600], 00:22:33.199 | 99.99th=[ 651] 00:22:33.199 bw ( KiB/s): min=52736, max=254464, per=7.36%, avg=118440.55, stdev=64923.25, samples=20 00:22:33.199 iops : min= 206, max= 994, avg=462.60, stdev=253.59, samples=20 00:22:33.199 lat (msec) : 2=0.02%, 4=0.19%, 10=1.81%, 20=5.44%, 50=11.70% 00:22:33.199 lat (msec) : 100=28.86%, 250=34.09%, 500=17.12%, 750=0.77% 00:22:33.199 cpu : usr=0.32%, sys=1.27%, ctx=1237, majf=0, minf=4097 00:22:33.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:33.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:33.199 issued rwts: total=4691,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.199 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:33.199 job7: (groupid=0, jobs=1): err= 0: pid=34012: Wed Apr 17 06:48:35 2024 00:22:33.199 read: IOPS=305, BW=76.4MiB/s (80.1MB/s)(777MiB/10163msec) 00:22:33.199 slat (usec): min=13, max=119507, avg=3219.99, stdev=9306.09 00:22:33.199 clat (msec): min=43, max=568, avg=206.03, stdev=75.92 00:22:33.199 lat (msec): min=44, max=568, avg=209.25, stdev=77.07 00:22:33.199 clat percentiles (msec): 00:22:33.199 | 1.00th=[ 65], 5.00th=[ 96], 10.00th=[ 110], 20.00th=[ 133], 00:22:33.199 | 30.00th=[ 157], 40.00th=[ 188], 50.00th=[ 207], 60.00th=[ 226], 00:22:33.199 | 70.00th=[ 249], 80.00th=[ 271], 90.00th=[ 296], 95.00th=[ 317], 00:22:33.199 | 99.00th=[ 456], 99.50th=[ 493], 99.90th=[ 502], 99.95th=[ 514], 00:22:33.199 | 99.99th=[ 567] 00:22:33.200 bw ( KiB/s): min=50176, max=132608, per=4.84%, avg=77866.45, stdev=24652.51, samples=20 
00:22:33.200 iops : min= 196, max= 518, avg=304.15, stdev=96.29, samples=20 00:22:33.200 lat (msec) : 50=0.61%, 100=6.99%, 250=62.56%, 500=29.65%, 750=0.19% 00:22:33.200 cpu : usr=0.25%, sys=1.02%, ctx=621, majf=0, minf=4097 00:22:33.200 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:22:33.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:33.200 issued rwts: total=3106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.200 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:33.200 job8: (groupid=0, jobs=1): err= 0: pid=34019: Wed Apr 17 06:48:35 2024 00:22:33.200 read: IOPS=382, BW=95.5MiB/s (100MB/s)(962MiB/10068msec) 00:22:33.200 slat (usec): min=14, max=106878, avg=2468.01, stdev=7602.34 00:22:33.200 clat (msec): min=10, max=379, avg=164.95, stdev=75.38 00:22:33.200 lat (msec): min=10, max=403, avg=167.42, stdev=76.65 00:22:33.200 clat percentiles (msec): 00:22:33.200 | 1.00th=[ 23], 5.00th=[ 71], 10.00th=[ 85], 20.00th=[ 99], 00:22:33.200 | 30.00th=[ 108], 40.00th=[ 128], 50.00th=[ 144], 60.00th=[ 174], 00:22:33.200 | 70.00th=[ 207], 80.00th=[ 253], 90.00th=[ 279], 95.00th=[ 296], 00:22:33.200 | 99.00th=[ 313], 99.50th=[ 317], 99.90th=[ 368], 99.95th=[ 368], 00:22:33.200 | 99.99th=[ 380] 00:22:33.200 bw ( KiB/s): min=45568, max=187392, per=6.02%, avg=96823.10, stdev=41467.22, samples=20 00:22:33.200 iops : min= 178, max= 732, avg=378.15, stdev=161.94, samples=20 00:22:33.200 lat (msec) : 20=0.68%, 50=2.18%, 100=19.34%, 250=57.18%, 500=20.62% 00:22:33.200 cpu : usr=0.26%, sys=1.34%, ctx=906, majf=0, minf=4097 00:22:33.200 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:22:33.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:33.200 issued rwts: total=3846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.200 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:33.200 job9: (groupid=0, jobs=1): err= 0: pid=34020: Wed Apr 17 06:48:35 2024 00:22:33.200 read: IOPS=662, BW=166MiB/s (174MB/s)(1668MiB/10064msec) 00:22:33.200 slat (usec): min=13, max=60525, avg=1494.01, stdev=4035.44 00:22:33.200 clat (msec): min=29, max=220, avg=95.00, stdev=31.49 00:22:33.200 lat (msec): min=29, max=223, avg=96.49, stdev=31.98 00:22:33.200 clat percentiles (msec): 00:22:33.200 | 1.00th=[ 48], 5.00th=[ 57], 10.00th=[ 64], 20.00th=[ 72], 00:22:33.200 | 30.00th=[ 77], 40.00th=[ 82], 50.00th=[ 87], 60.00th=[ 93], 00:22:33.200 | 70.00th=[ 103], 80.00th=[ 117], 90.00th=[ 144], 95.00th=[ 163], 00:22:33.200 | 99.00th=[ 186], 99.50th=[ 190], 99.90th=[ 207], 99.95th=[ 213], 00:22:33.200 | 99.99th=[ 222] 00:22:33.200 bw ( KiB/s): min=94208, max=287232, per=10.51%, avg=169146.50, stdev=47796.87, samples=20 00:22:33.200 iops : min= 368, max= 1122, avg=660.65, stdev=186.73, samples=20 00:22:33.200 lat (msec) : 50=2.26%, 100=66.18%, 250=31.55% 00:22:33.200 cpu : usr=0.40%, sys=2.17%, ctx=1350, majf=0, minf=4097 00:22:33.200 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:33.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:33.200 issued rwts: total=6671,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.200 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:33.200 
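With all 11 namespaces connected, the fio read pass above drives 256 KiB sequential reads at queue depth 64 against each /dev/nvmeXn1 for 10 seconds, one job per namespace. The per-job numbers are self-consistent and can be cross-checked from the issued I/O counts: job0, for example, completed 6616 reads of 256 KiB, i.e. 6616 x 0.25 MiB = 1654 MiB in 10.066 s, which works out to 1654 / 10.066 = 164 MiB/s and 6616 / 10.066 = 657 IOPS, matching the BW and IOPS fields it reports; that 164 MiB/s also lines up with the 10.43% share of the 1571 MiB/s aggregate shown in the run status group below.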
job10: (groupid=0, jobs=1): err= 0: pid=34021: Wed Apr 17 06:48:35 2024 00:22:33.200 read: IOPS=773, BW=193MiB/s (203MB/s)(1940MiB/10037msec) 00:22:33.200 slat (usec): min=9, max=226708, avg=910.40, stdev=5291.84 00:22:33.200 clat (usec): min=924, max=515860, avg=81815.11, stdev=67537.00 00:22:33.200 lat (usec): min=948, max=515879, avg=82725.52, stdev=68464.35 00:22:33.200 clat percentiles (msec): 00:22:33.200 | 1.00th=[ 6], 5.00th=[ 11], 10.00th=[ 22], 20.00th=[ 36], 00:22:33.200 | 30.00th=[ 53], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 74], 00:22:33.200 | 70.00th=[ 83], 80.00th=[ 97], 90.00th=[ 157], 95.00th=[ 275], 00:22:33.200 | 99.00th=[ 313], 99.50th=[ 338], 99.90th=[ 359], 99.95th=[ 397], 00:22:33.200 | 99.99th=[ 514] 00:22:33.200 bw ( KiB/s): min=50688, max=333312, per=12.24%, avg=197001.55, stdev=85611.06, samples=20 00:22:33.200 iops : min= 198, max= 1302, avg=769.50, stdev=334.42, samples=20 00:22:33.200 lat (usec) : 1000=0.01% 00:22:33.200 lat (msec) : 2=0.23%, 4=0.30%, 10=4.46%, 20=4.02%, 50=18.96% 00:22:33.200 lat (msec) : 100=53.16%, 250=12.54%, 500=6.31%, 750=0.01% 00:22:33.200 cpu : usr=0.34%, sys=2.15%, ctx=1697, majf=0, minf=4097 00:22:33.200 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:33.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:33.200 issued rwts: total=7760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.200 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:33.200 00:22:33.200 Run status group 0 (all jobs): 00:22:33.200 READ: bw=1571MiB/s (1648MB/s), 76.4MiB/s-319MiB/s (80.1MB/s-335MB/s), io=15.6GiB (16.7GB), run=10037-10165msec 00:22:33.200 00:22:33.200 Disk stats (read/write): 00:22:33.200 nvme0n1: ios=13001/0, merge=0/0, ticks=1234104/0, in_queue=1234104, util=97.20% 00:22:33.200 nvme10n1: ios=25427/0, merge=0/0, ticks=1235479/0, in_queue=1235479, util=97.40% 00:22:33.200 nvme1n1: ios=12198/0, merge=0/0, ticks=1240118/0, in_queue=1240118, util=97.68% 00:22:33.200 nvme2n1: ios=9449/0, merge=0/0, ticks=1228798/0, in_queue=1228798, util=97.81% 00:22:33.200 nvme3n1: ios=6382/0, merge=0/0, ticks=1224426/0, in_queue=1224426, util=97.87% 00:22:33.200 nvme4n1: ios=7935/0, merge=0/0, ticks=1227168/0, in_queue=1227168, util=98.19% 00:22:33.200 nvme5n1: ios=9213/0, merge=0/0, ticks=1230598/0, in_queue=1230598, util=98.35% 00:22:33.200 nvme6n1: ios=6069/0, merge=0/0, ticks=1228581/0, in_queue=1228581, util=98.46% 00:22:33.200 nvme7n1: ios=7500/0, merge=0/0, ticks=1230526/0, in_queue=1230526, util=98.93% 00:22:33.200 nvme8n1: ios=13140/0, merge=0/0, ticks=1232154/0, in_queue=1232154, util=99.10% 00:22:33.200 nvme9n1: ios=15290/0, merge=0/0, ticks=1239543/0, in_queue=1239543, util=99.24% 00:22:33.200 06:48:35 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:22:33.200 [global] 00:22:33.200 thread=1 00:22:33.200 invalidate=1 00:22:33.200 rw=randwrite 00:22:33.200 time_based=1 00:22:33.200 runtime=10 00:22:33.200 ioengine=libaio 00:22:33.200 direct=1 00:22:33.200 bs=262144 00:22:33.200 iodepth=64 00:22:33.200 norandommap=1 00:22:33.200 numjobs=1 00:22:33.200 00:22:33.200 [job0] 00:22:33.200 filename=/dev/nvme0n1 00:22:33.200 [job1] 00:22:33.200 filename=/dev/nvme10n1 00:22:33.200 [job2] 00:22:33.200 filename=/dev/nvme1n1 00:22:33.200 [job3] 00:22:33.200 filename=/dev/nvme2n1 00:22:33.200 [job4] 00:22:33.200 
filename=/dev/nvme3n1 00:22:33.200 [job5] 00:22:33.200 filename=/dev/nvme4n1 00:22:33.200 [job6] 00:22:33.200 filename=/dev/nvme5n1 00:22:33.200 [job7] 00:22:33.200 filename=/dev/nvme6n1 00:22:33.200 [job8] 00:22:33.200 filename=/dev/nvme7n1 00:22:33.200 [job9] 00:22:33.200 filename=/dev/nvme8n1 00:22:33.200 [job10] 00:22:33.200 filename=/dev/nvme9n1 00:22:33.200 Could not set queue depth (nvme0n1) 00:22:33.200 Could not set queue depth (nvme10n1) 00:22:33.200 Could not set queue depth (nvme1n1) 00:22:33.200 Could not set queue depth (nvme2n1) 00:22:33.200 Could not set queue depth (nvme3n1) 00:22:33.200 Could not set queue depth (nvme4n1) 00:22:33.200 Could not set queue depth (nvme5n1) 00:22:33.200 Could not set queue depth (nvme6n1) 00:22:33.200 Could not set queue depth (nvme7n1) 00:22:33.200 Could not set queue depth (nvme8n1) 00:22:33.200 Could not set queue depth (nvme9n1) 00:22:33.200 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:33.200 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:33.200 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:33.200 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:33.200 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:33.200 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:33.200 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:33.200 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:33.200 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:33.200 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:33.200 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:33.200 fio-3.35 00:22:33.200 Starting 11 threads 00:22:43.209 00:22:43.209 job0: (groupid=0, jobs=1): err= 0: pid=35183: Wed Apr 17 06:48:46 2024 00:22:43.209 write: IOPS=357, BW=89.3MiB/s (93.6MB/s)(906MiB/10145msec); 0 zone resets 00:22:43.209 slat (usec): min=24, max=428611, avg=2360.91, stdev=9996.71 00:22:43.209 clat (msec): min=3, max=1173, avg=176.77, stdev=147.31 00:22:43.209 lat (msec): min=3, max=1173, avg=179.13, stdev=149.24 00:22:43.209 clat percentiles (msec): 00:22:43.209 | 1.00th=[ 10], 5.00th=[ 37], 10.00th=[ 50], 20.00th=[ 82], 00:22:43.209 | 30.00th=[ 111], 40.00th=[ 136], 50.00th=[ 157], 60.00th=[ 182], 00:22:43.209 | 70.00th=[ 209], 80.00th=[ 230], 90.00th=[ 292], 95.00th=[ 342], 00:22:43.209 | 99.00th=[ 1062], 99.50th=[ 1099], 99.90th=[ 1167], 99.95th=[ 1167], 00:22:43.209 | 99.99th=[ 1167] 00:22:43.209 bw ( KiB/s): min= 4096, max=222208, per=6.73%, avg=91136.00, stdev=52326.93, samples=20 00:22:43.209 iops : min= 16, max= 868, avg=356.00, stdev=204.40, samples=20 00:22:43.209 lat (msec) : 4=0.08%, 10=0.94%, 20=1.77%, 50=7.54%, 100=14.57% 00:22:43.209 lat (msec) : 250=59.92%, 500=13.22%, 750=0.22%, 1000=0.22%, 2000=1.52% 00:22:43.209 cpu : usr=1.10%, sys=0.94%, ctx=1611, majf=0, minf=1 00:22:43.209 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:22:43.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:43.209 issued rwts: total=0,3623,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.209 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:43.209 job1: (groupid=0, jobs=1): err= 0: pid=35194: Wed Apr 17 06:48:46 2024 00:22:43.209 write: IOPS=576, BW=144MiB/s (151MB/s)(1461MiB/10144msec); 0 zone resets 00:22:43.209 slat (usec): min=19, max=146525, avg=1196.68, stdev=3757.96 00:22:43.209 clat (usec): min=1411, max=360823, avg=109826.70, stdev=59903.36 00:22:43.209 lat (usec): min=1451, max=360856, avg=111023.37, stdev=60485.59 00:22:43.209 clat percentiles (msec): 00:22:43.209 | 1.00th=[ 8], 5.00th=[ 27], 10.00th=[ 40], 20.00th=[ 51], 00:22:43.209 | 30.00th=[ 66], 40.00th=[ 91], 50.00th=[ 112], 60.00th=[ 127], 00:22:43.209 | 70.00th=[ 148], 80.00th=[ 153], 90.00th=[ 182], 95.00th=[ 203], 00:22:43.209 | 99.00th=[ 292], 99.50th=[ 309], 99.90th=[ 355], 99.95th=[ 355], 00:22:43.209 | 99.99th=[ 363] 00:22:43.209 bw ( KiB/s): min=97280, max=353792, per=10.93%, avg=148008.55, stdev=60095.52, samples=20 00:22:43.209 iops : min= 380, max= 1382, avg=578.15, stdev=234.75, samples=20 00:22:43.209 lat (msec) : 2=0.07%, 4=0.26%, 10=0.99%, 20=2.60%, 50=15.49% 00:22:43.209 lat (msec) : 100=24.14%, 250=53.82%, 500=2.64% 00:22:43.209 cpu : usr=1.65%, sys=2.06%, ctx=2977, majf=0, minf=1 00:22:43.209 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:22:43.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:43.209 issued rwts: total=0,5844,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.209 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:43.209 job2: (groupid=0, jobs=1): err= 0: pid=35196: Wed Apr 17 06:48:46 2024 00:22:43.209 write: IOPS=502, BW=126MiB/s (132MB/s)(1265MiB/10058msec); 0 zone resets 00:22:43.209 slat (usec): min=16, max=72171, avg=1410.13, stdev=3587.82 00:22:43.209 clat (usec): min=1995, max=281046, avg=125814.52, stdev=52155.54 00:22:43.209 lat (msec): min=2, max=284, avg=127.22, stdev=52.83 00:22:43.209 clat percentiles (msec): 00:22:43.209 | 1.00th=[ 12], 5.00th=[ 37], 10.00th=[ 61], 20.00th=[ 85], 00:22:43.209 | 30.00th=[ 96], 40.00th=[ 109], 50.00th=[ 127], 60.00th=[ 138], 00:22:43.209 | 70.00th=[ 148], 80.00th=[ 171], 90.00th=[ 197], 95.00th=[ 213], 00:22:43.209 | 99.00th=[ 245], 99.50th=[ 259], 99.90th=[ 275], 99.95th=[ 275], 00:22:43.209 | 99.99th=[ 284] 00:22:43.209 bw ( KiB/s): min=86016, max=168960, per=9.44%, avg=127884.20, stdev=28886.48, samples=20 00:22:43.209 iops : min= 336, max= 660, avg=499.50, stdev=112.85, samples=20 00:22:43.209 lat (msec) : 2=0.02%, 4=0.16%, 10=0.65%, 20=1.34%, 50=5.50% 00:22:43.209 lat (msec) : 100=26.35%, 250=65.20%, 500=0.77% 00:22:43.209 cpu : usr=1.42%, sys=1.92%, ctx=2707, majf=0, minf=1 00:22:43.209 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:43.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:43.210 issued rwts: total=0,5058,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.210 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:43.210 job3: (groupid=0, jobs=1): err= 0: pid=35197: Wed Apr 
17 06:48:46 2024 00:22:43.210 write: IOPS=417, BW=104MiB/s (109MB/s)(1059MiB/10142msec); 0 zone resets 00:22:43.210 slat (usec): min=15, max=109749, avg=1729.66, stdev=4651.95 00:22:43.210 clat (msec): min=13, max=390, avg=151.43, stdev=65.75 00:22:43.210 lat (msec): min=13, max=390, avg=153.16, stdev=66.62 00:22:43.210 clat percentiles (msec): 00:22:43.210 | 1.00th=[ 32], 5.00th=[ 54], 10.00th=[ 72], 20.00th=[ 107], 00:22:43.210 | 30.00th=[ 115], 40.00th=[ 132], 50.00th=[ 146], 60.00th=[ 155], 00:22:43.210 | 70.00th=[ 169], 80.00th=[ 194], 90.00th=[ 243], 95.00th=[ 279], 00:22:43.210 | 99.00th=[ 368], 99.50th=[ 384], 99.90th=[ 388], 99.95th=[ 393], 00:22:43.210 | 99.99th=[ 393] 00:22:43.210 bw ( KiB/s): min=38912, max=161280, per=7.89%, avg=106828.80, stdev=29212.72, samples=20 00:22:43.210 iops : min= 152, max= 630, avg=417.30, stdev=114.11, samples=20 00:22:43.210 lat (msec) : 20=0.12%, 50=3.71%, 100=13.67%, 250=73.16%, 500=9.35% 00:22:43.210 cpu : usr=1.21%, sys=1.64%, ctx=2170, majf=0, minf=1 00:22:43.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:22:43.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:43.210 issued rwts: total=0,4236,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.210 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:43.210 job4: (groupid=0, jobs=1): err= 0: pid=35198: Wed Apr 17 06:48:46 2024 00:22:43.210 write: IOPS=334, BW=83.5MiB/s (87.6MB/s)(847MiB/10145msec); 0 zone resets 00:22:43.210 slat (usec): min=24, max=886392, avg=2572.65, stdev=16152.13 00:22:43.210 clat (msec): min=4, max=1153, avg=188.92, stdev=140.68 00:22:43.210 lat (msec): min=4, max=1153, avg=191.49, stdev=141.89 00:22:43.210 clat percentiles (msec): 00:22:43.210 | 1.00th=[ 22], 5.00th=[ 57], 10.00th=[ 89], 20.00th=[ 120], 00:22:43.210 | 30.00th=[ 130], 40.00th=[ 142], 50.00th=[ 167], 60.00th=[ 192], 00:22:43.210 | 70.00th=[ 213], 80.00th=[ 230], 90.00th=[ 271], 95.00th=[ 342], 00:22:43.210 | 99.00th=[ 1053], 99.50th=[ 1099], 99.90th=[ 1133], 99.95th=[ 1150], 00:22:43.210 | 99.99th=[ 1150] 00:22:43.210 bw ( KiB/s): min=34816, max=141824, per=6.62%, avg=89638.89, stdev=29017.04, samples=19 00:22:43.210 iops : min= 136, max= 554, avg=350.11, stdev=113.31, samples=19 00:22:43.210 lat (msec) : 10=0.21%, 20=0.65%, 50=3.19%, 100=7.88%, 250=74.54% 00:22:43.210 lat (msec) : 500=11.68%, 1000=0.12%, 2000=1.74% 00:22:43.210 cpu : usr=1.17%, sys=0.97%, ctx=1474, majf=0, minf=1 00:22:43.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:22:43.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:43.210 issued rwts: total=0,3389,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.210 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:43.210 job5: (groupid=0, jobs=1): err= 0: pid=35199: Wed Apr 17 06:48:46 2024 00:22:43.210 write: IOPS=399, BW=99.9MiB/s (105MB/s)(1014MiB/10144msec); 0 zone resets 00:22:43.210 slat (usec): min=24, max=51468, avg=2104.52, stdev=4452.42 00:22:43.210 clat (msec): min=4, max=334, avg=157.95, stdev=54.37 00:22:43.210 lat (msec): min=4, max=334, avg=160.06, stdev=55.02 00:22:43.210 clat percentiles (msec): 00:22:43.210 | 1.00th=[ 12], 5.00th=[ 52], 10.00th=[ 89], 20.00th=[ 134], 00:22:43.210 | 30.00th=[ 142], 40.00th=[ 148], 50.00th=[ 153], 60.00th=[ 163], 00:22:43.210 | 
70.00th=[ 176], 80.00th=[ 194], 90.00th=[ 218], 95.00th=[ 257], 00:22:43.210 | 99.00th=[ 313], 99.50th=[ 330], 99.90th=[ 334], 99.95th=[ 334], 00:22:43.210 | 99.99th=[ 334] 00:22:43.210 bw ( KiB/s): min=67719, max=140800, per=7.55%, avg=102176.35, stdev=20195.55, samples=20 00:22:43.210 iops : min= 264, max= 550, avg=399.10, stdev=78.94, samples=20 00:22:43.210 lat (msec) : 10=0.72%, 20=1.13%, 50=3.01%, 100=6.88%, 250=82.51% 00:22:43.210 lat (msec) : 500=5.75% 00:22:43.210 cpu : usr=1.22%, sys=1.40%, ctx=1669, majf=0, minf=1 00:22:43.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:22:43.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:43.210 issued rwts: total=0,4054,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.210 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:43.210 job6: (groupid=0, jobs=1): err= 0: pid=35200: Wed Apr 17 06:48:46 2024 00:22:43.210 write: IOPS=411, BW=103MiB/s (108MB/s)(1042MiB/10138msec); 0 zone resets 00:22:43.210 slat (usec): min=17, max=352488, avg=1747.95, stdev=8100.99 00:22:43.210 clat (usec): min=1472, max=1130.3k, avg=153885.30, stdev=131396.61 00:22:43.210 lat (usec): min=1504, max=1140.7k, avg=155633.25, stdev=132997.00 00:22:43.210 clat percentiles (msec): 00:22:43.210 | 1.00th=[ 9], 5.00th=[ 24], 10.00th=[ 44], 20.00th=[ 83], 00:22:43.210 | 30.00th=[ 115], 40.00th=[ 138], 50.00th=[ 148], 60.00th=[ 153], 00:22:43.210 | 70.00th=[ 165], 80.00th=[ 190], 90.00th=[ 232], 95.00th=[ 264], 00:22:43.210 | 99.00th=[ 1070], 99.50th=[ 1099], 99.90th=[ 1116], 99.95th=[ 1133], 00:22:43.210 | 99.99th=[ 1133] 00:22:43.210 bw ( KiB/s): min= 4096, max=193024, per=7.76%, avg=105062.40, stdev=44056.93, samples=20 00:22:43.210 iops : min= 16, max= 754, avg=410.40, stdev=172.10, samples=20 00:22:43.210 lat (msec) : 2=0.07%, 4=0.24%, 10=1.03%, 20=2.71%, 50=7.90% 00:22:43.210 lat (msec) : 100=11.21%, 250=69.83%, 500=5.30%, 750=0.17%, 1000=0.22% 00:22:43.210 lat (msec) : 2000=1.32% 00:22:43.210 cpu : usr=1.14%, sys=1.45%, ctx=2230, majf=0, minf=1 00:22:43.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:22:43.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:43.210 issued rwts: total=0,4167,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.210 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:43.210 job7: (groupid=0, jobs=1): err= 0: pid=35201: Wed Apr 17 06:48:46 2024 00:22:43.210 write: IOPS=642, BW=161MiB/s (168MB/s)(1615MiB/10062msec); 0 zone resets 00:22:43.210 slat (usec): min=18, max=95546, avg=841.42, stdev=2893.74 00:22:43.210 clat (usec): min=1548, max=1092.5k, avg=98773.34, stdev=96314.70 00:22:43.210 lat (usec): min=1605, max=1092.6k, avg=99614.75, stdev=96644.31 00:22:43.210 clat percentiles (msec): 00:22:43.210 | 1.00th=[ 11], 5.00th=[ 17], 10.00th=[ 20], 20.00th=[ 37], 00:22:43.210 | 30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 88], 60.00th=[ 101], 00:22:43.210 | 70.00th=[ 124], 80.00th=[ 140], 90.00th=[ 186], 95.00th=[ 220], 00:22:43.210 | 99.00th=[ 300], 99.50th=[ 986], 99.90th=[ 1070], 99.95th=[ 1083], 00:22:43.210 | 99.99th=[ 1099] 00:22:43.210 bw ( KiB/s): min=83456, max=307200, per=12.09%, avg=163788.80, stdev=61823.01, samples=20 00:22:43.210 iops : min= 326, max= 1200, avg=639.80, stdev=241.50, samples=20 00:22:43.210 lat (msec) : 2=0.03%, 
4=0.17%, 10=0.74%, 20=10.46%, 50=17.38% 00:22:43.210 lat (msec) : 100=31.20%, 250=37.25%, 500=2.01%, 750=0.12%, 1000=0.17% 00:22:43.210 lat (msec) : 2000=0.45% 00:22:43.210 cpu : usr=2.06%, sys=2.46%, ctx=4316, majf=0, minf=1 00:22:43.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:22:43.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:43.210 issued rwts: total=0,6461,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.210 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:43.210 job8: (groupid=0, jobs=1): err= 0: pid=35202: Wed Apr 17 06:48:46 2024 00:22:43.210 write: IOPS=890, BW=223MiB/s (233MB/s)(2242MiB/10072msec); 0 zone resets 00:22:43.210 slat (usec): min=22, max=25254, avg=898.61, stdev=2192.77 00:22:43.210 clat (msec): min=2, max=315, avg=70.95, stdev=48.57 00:22:43.210 lat (msec): min=3, max=319, avg=71.85, stdev=49.09 00:22:43.210 clat percentiles (msec): 00:22:43.210 | 1.00th=[ 15], 5.00th=[ 40], 10.00th=[ 42], 20.00th=[ 43], 00:22:43.210 | 30.00th=[ 43], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 55], 00:22:43.210 | 70.00th=[ 71], 80.00th=[ 100], 90.00th=[ 142], 95.00th=[ 182], 00:22:43.210 | 99.00th=[ 228], 99.50th=[ 292], 99.90th=[ 305], 99.95th=[ 309], 00:22:43.210 | 99.99th=[ 317] 00:22:43.210 bw ( KiB/s): min=83968, max=377856, per=16.83%, avg=227980.55, stdev=111640.94, samples=20 00:22:43.210 iops : min= 328, max= 1476, avg=890.50, stdev=436.14, samples=20 00:22:43.210 lat (msec) : 4=0.04%, 10=0.37%, 20=1.00%, 50=55.24%, 100=23.61% 00:22:43.210 lat (msec) : 250=18.87%, 500=0.87% 00:22:43.210 cpu : usr=2.78%, sys=2.82%, ctx=3326, majf=0, minf=1 00:22:43.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:22:43.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:43.210 issued rwts: total=0,8968,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.210 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:43.210 job9: (groupid=0, jobs=1): err= 0: pid=35203: Wed Apr 17 06:48:46 2024 00:22:43.210 write: IOPS=371, BW=92.8MiB/s (97.3MB/s)(942MiB/10145msec); 0 zone resets 00:22:43.210 slat (usec): min=19, max=293454, avg=1827.01, stdev=7674.86 00:22:43.210 clat (msec): min=2, max=1077, avg=170.44, stdev=128.24 00:22:43.210 lat (msec): min=3, max=1077, avg=172.27, stdev=129.23 00:22:43.210 clat percentiles (msec): 00:22:43.210 | 1.00th=[ 8], 5.00th=[ 16], 10.00th=[ 27], 20.00th=[ 72], 00:22:43.210 | 30.00th=[ 99], 40.00th=[ 138], 50.00th=[ 163], 60.00th=[ 194], 00:22:43.210 | 70.00th=[ 224], 80.00th=[ 243], 90.00th=[ 284], 95.00th=[ 326], 00:22:43.210 | 99.00th=[ 684], 99.50th=[ 1020], 99.90th=[ 1053], 99.95th=[ 1083], 00:22:43.210 | 99.99th=[ 1083] 00:22:43.210 bw ( KiB/s): min=40960, max=236544, per=7.00%, avg=94822.40, stdev=45373.23, samples=20 00:22:43.210 iops : min= 160, max= 924, avg=370.40, stdev=177.24, samples=20 00:22:43.210 lat (msec) : 4=0.16%, 10=1.51%, 20=5.63%, 50=8.79%, 100=14.57% 00:22:43.210 lat (msec) : 250=51.95%, 500=15.72%, 750=0.85%, 1000=0.24%, 2000=0.58% 00:22:43.210 cpu : usr=1.13%, sys=1.32%, ctx=2333, majf=0, minf=1 00:22:43.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:22:43.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.1%, >=64=0.0% 00:22:43.210 issued rwts: total=0,3767,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.210 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:43.210 job10: (groupid=0, jobs=1): err= 0: pid=35204: Wed Apr 17 06:48:46 2024 00:22:43.211 write: IOPS=405, BW=101MiB/s (106MB/s)(1030MiB/10149msec); 0 zone resets 00:22:43.211 slat (usec): min=20, max=702773, avg=1784.77, stdev=12160.83 00:22:43.211 clat (msec): min=2, max=1050, avg=155.72, stdev=138.71 00:22:43.211 lat (msec): min=2, max=1050, avg=157.50, stdev=139.97 00:22:43.211 clat percentiles (msec): 00:22:43.211 | 1.00th=[ 10], 5.00th=[ 22], 10.00th=[ 33], 20.00th=[ 53], 00:22:43.211 | 30.00th=[ 62], 40.00th=[ 101], 50.00th=[ 128], 60.00th=[ 178], 00:22:43.211 | 70.00th=[ 213], 80.00th=[ 232], 90.00th=[ 271], 95.00th=[ 326], 00:22:43.211 | 99.00th=[ 953], 99.50th=[ 1003], 99.90th=[ 1036], 99.95th=[ 1045], 00:22:43.211 | 99.99th=[ 1053] 00:22:43.211 bw ( KiB/s): min=43008, max=201728, per=8.07%, avg=109336.74, stdev=50906.91, samples=19 00:22:43.211 iops : min= 168, max= 788, avg=427.05, stdev=198.86, samples=19 00:22:43.211 lat (msec) : 4=0.22%, 10=1.02%, 20=2.82%, 50=13.83%, 100=22.06% 00:22:43.211 lat (msec) : 250=44.49%, 500=14.03%, 1000=1.02%, 2000=0.51% 00:22:43.211 cpu : usr=1.21%, sys=1.34%, ctx=2424, majf=0, minf=1 00:22:43.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:22:43.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:43.211 issued rwts: total=0,4120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.211 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:43.211 00:22:43.211 Run status group 0 (all jobs): 00:22:43.211 WRITE: bw=1322MiB/s (1387MB/s), 83.5MiB/s-223MiB/s (87.6MB/s-233MB/s), io=13.1GiB (14.1GB), run=10058-10149msec 00:22:43.211 00:22:43.211 Disk stats (read/write): 00:22:43.211 nvme0n1: ios=49/7059, merge=0/0, ticks=81/1200495, in_queue=1200576, util=97.66% 00:22:43.211 nvme10n1: ios=53/11517, merge=0/0, ticks=2903/1199944, in_queue=1202847, util=100.00% 00:22:43.211 nvme1n1: ios=5/9874, merge=0/0, ticks=15/1221705, in_queue=1221720, util=97.58% 00:22:43.211 nvme2n1: ios=0/8306, merge=0/0, ticks=0/1216794, in_queue=1216794, util=97.76% 00:22:43.211 nvme3n1: ios=0/6620, merge=0/0, ticks=0/1208722, in_queue=1208722, util=97.82% 00:22:43.211 nvme4n1: ios=0/7935, merge=0/0, ticks=0/1211281, in_queue=1211281, util=98.16% 00:22:43.211 nvme5n1: ios=0/8165, merge=0/0, ticks=0/1215308, in_queue=1215308, util=98.26% 00:22:43.211 nvme6n1: ios=46/12584, merge=0/0, ticks=2404/1223147, in_queue=1225551, util=100.00% 00:22:43.211 nvme7n1: ios=0/17668, merge=0/0, ticks=0/1219586, in_queue=1219586, util=98.81% 00:22:43.211 nvme8n1: ios=47/7365, merge=0/0, ticks=2731/1199328, in_queue=1202059, util=100.00% 00:22:43.211 nvme9n1: ios=44/8070, merge=0/0, ticks=1785/1213059, in_queue=1214844, util=100.00% 00:22:43.211 06:48:46 -- target/multiconnection.sh@36 -- # sync 00:22:43.211 06:48:46 -- target/multiconnection.sh@37 -- # seq 1 11 00:22:43.211 06:48:46 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:43.211 06:48:46 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:43.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:43.211 06:48:47 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:22:43.211 06:48:47 -- common/autotest_common.sh@1205 -- # local 
i=0 00:22:43.211 06:48:47 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:22:43.211 06:48:47 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:22:43.211 06:48:47 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:22:43.211 06:48:47 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK1 00:22:43.211 06:48:47 -- common/autotest_common.sh@1217 -- # return 0 00:22:43.211 06:48:47 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:43.211 06:48:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.211 06:48:47 -- common/autotest_common.sh@10 -- # set +x 00:22:43.211 06:48:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:43.211 06:48:47 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:43.211 06:48:47 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:22:43.211 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:22:43.211 06:48:47 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:22:43.211 06:48:47 -- common/autotest_common.sh@1205 -- # local i=0 00:22:43.211 06:48:47 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:22:43.211 06:48:47 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:22:43.211 06:48:47 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:22:43.211 06:48:47 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK2 00:22:43.211 06:48:47 -- common/autotest_common.sh@1217 -- # return 0 00:22:43.211 06:48:47 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:43.211 06:48:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.211 06:48:47 -- common/autotest_common.sh@10 -- # set +x 00:22:43.211 06:48:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:43.211 06:48:47 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:43.211 06:48:47 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:22:43.211 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:22:43.211 06:48:47 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:22:43.211 06:48:47 -- common/autotest_common.sh@1205 -- # local i=0 00:22:43.211 06:48:47 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:22:43.211 06:48:47 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:22:43.211 06:48:47 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:22:43.211 06:48:47 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK3 00:22:43.211 06:48:47 -- common/autotest_common.sh@1217 -- # return 0 00:22:43.211 06:48:47 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:43.211 06:48:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.211 06:48:47 -- common/autotest_common.sh@10 -- # set +x 00:22:43.211 06:48:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:43.211 06:48:47 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:43.211 06:48:47 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:22:43.469 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:22:43.469 06:48:47 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:22:43.469 06:48:47 -- common/autotest_common.sh@1205 -- # local i=0 00:22:43.469 06:48:47 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:22:43.469 
06:48:47 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:22:43.469 06:48:47 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:22:43.469 06:48:47 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK4 00:22:43.469 06:48:47 -- common/autotest_common.sh@1217 -- # return 0 00:22:43.469 06:48:47 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:22:43.469 06:48:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.469 06:48:47 -- common/autotest_common.sh@10 -- # set +x 00:22:43.469 06:48:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:43.469 06:48:47 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:43.469 06:48:47 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:22:43.727 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:22:43.727 06:48:48 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:22:43.727 06:48:48 -- common/autotest_common.sh@1205 -- # local i=0 00:22:43.727 06:48:48 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:22:43.727 06:48:48 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:22:43.727 06:48:48 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:22:43.727 06:48:48 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK5 00:22:43.727 06:48:48 -- common/autotest_common.sh@1217 -- # return 0 00:22:43.727 06:48:48 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:22:43.727 06:48:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.727 06:48:48 -- common/autotest_common.sh@10 -- # set +x 00:22:43.727 06:48:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:43.727 06:48:48 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:43.727 06:48:48 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:22:43.985 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:22:43.985 06:48:48 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:22:43.985 06:48:48 -- common/autotest_common.sh@1205 -- # local i=0 00:22:43.985 06:48:48 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:22:43.985 06:48:48 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:22:43.985 06:48:48 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:22:43.985 06:48:48 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK6 00:22:43.985 06:48:48 -- common/autotest_common.sh@1217 -- # return 0 00:22:43.985 06:48:48 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:22:43.985 06:48:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.985 06:48:48 -- common/autotest_common.sh@10 -- # set +x 00:22:43.985 06:48:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:43.985 06:48:48 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:43.985 06:48:48 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:22:44.244 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:22:44.244 06:48:48 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:22:44.244 06:48:48 -- common/autotest_common.sh@1205 -- # local i=0 00:22:44.244 06:48:48 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:22:44.244 06:48:48 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:22:44.244 06:48:48 -- 
common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:22:44.244 06:48:48 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK7 00:22:44.244 06:48:48 -- common/autotest_common.sh@1217 -- # return 0 00:22:44.244 06:48:48 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:22:44.244 06:48:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:44.244 06:48:48 -- common/autotest_common.sh@10 -- # set +x 00:22:44.244 06:48:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:44.244 06:48:48 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:44.244 06:48:48 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:22:44.244 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:22:44.244 06:48:48 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:22:44.244 06:48:48 -- common/autotest_common.sh@1205 -- # local i=0 00:22:44.244 06:48:48 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:22:44.244 06:48:48 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:22:44.244 06:48:48 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:22:44.244 06:48:48 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK8 00:22:44.502 06:48:48 -- common/autotest_common.sh@1217 -- # return 0 00:22:44.502 06:48:48 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:22:44.502 06:48:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:44.502 06:48:48 -- common/autotest_common.sh@10 -- # set +x 00:22:44.502 06:48:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:44.502 06:48:48 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:44.502 06:48:48 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:22:44.502 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:22:44.502 06:48:48 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:22:44.502 06:48:48 -- common/autotest_common.sh@1205 -- # local i=0 00:22:44.502 06:48:48 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:22:44.502 06:48:48 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:22:44.502 06:48:48 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:22:44.502 06:48:48 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK9 00:22:44.502 06:48:48 -- common/autotest_common.sh@1217 -- # return 0 00:22:44.502 06:48:48 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:22:44.502 06:48:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:44.502 06:48:48 -- common/autotest_common.sh@10 -- # set +x 00:22:44.502 06:48:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:44.502 06:48:48 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:44.502 06:48:48 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:22:44.502 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:22:44.502 06:48:49 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:22:44.502 06:48:49 -- common/autotest_common.sh@1205 -- # local i=0 00:22:44.502 06:48:49 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:22:44.502 06:48:49 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:22:44.502 06:48:49 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:22:44.502 06:48:49 -- 
common/autotest_common.sh@1213 -- # grep -q -w SPDK10 00:22:44.502 06:48:49 -- common/autotest_common.sh@1217 -- # return 0 00:22:44.502 06:48:49 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:22:44.502 06:48:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:44.503 06:48:49 -- common/autotest_common.sh@10 -- # set +x 00:22:44.503 06:48:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:44.503 06:48:49 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:44.503 06:48:49 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:22:44.761 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:22:44.761 06:48:49 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:22:44.761 06:48:49 -- common/autotest_common.sh@1205 -- # local i=0 00:22:44.761 06:48:49 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:22:44.761 06:48:49 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:22:44.761 06:48:49 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:22:44.761 06:48:49 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK11 00:22:44.761 06:48:49 -- common/autotest_common.sh@1217 -- # return 0 00:22:44.761 06:48:49 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:22:44.761 06:48:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:44.761 06:48:49 -- common/autotest_common.sh@10 -- # set +x 00:22:44.761 06:48:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:44.761 06:48:49 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:22:44.761 06:48:49 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:22:44.761 06:48:49 -- target/multiconnection.sh@47 -- # nvmftestfini 00:22:44.761 06:48:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:44.761 06:48:49 -- nvmf/common.sh@117 -- # sync 00:22:44.761 06:48:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:44.761 06:48:49 -- nvmf/common.sh@120 -- # set +e 00:22:44.761 06:48:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:44.761 06:48:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:44.761 rmmod nvme_tcp 00:22:44.761 rmmod nvme_fabrics 00:22:44.761 rmmod nvme_keyring 00:22:44.761 06:48:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:44.761 06:48:49 -- nvmf/common.sh@124 -- # set -e 00:22:44.761 06:48:49 -- nvmf/common.sh@125 -- # return 0 00:22:44.761 06:48:49 -- nvmf/common.sh@478 -- # '[' -n 29264 ']' 00:22:44.761 06:48:49 -- nvmf/common.sh@479 -- # killprocess 29264 00:22:44.761 06:48:49 -- common/autotest_common.sh@936 -- # '[' -z 29264 ']' 00:22:44.761 06:48:49 -- common/autotest_common.sh@940 -- # kill -0 29264 00:22:44.761 06:48:49 -- common/autotest_common.sh@941 -- # uname 00:22:44.761 06:48:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:44.761 06:48:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 29264 00:22:44.761 06:48:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:44.761 06:48:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:44.761 06:48:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 29264' 00:22:44.761 killing process with pid 29264 00:22:44.761 06:48:49 -- common/autotest_common.sh@955 -- # kill 29264 00:22:44.761 06:48:49 -- common/autotest_common.sh@960 -- # wait 29264 00:22:45.329 06:48:49 -- nvmf/common.sh@481 -- # '[' 
'' == iso ']' 00:22:45.329 06:48:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:45.329 06:48:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:45.329 06:48:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:45.329 06:48:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:45.329 06:48:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.329 06:48:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:45.329 06:48:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.232 06:48:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:47.232 00:22:47.232 real 0m59.973s 00:22:47.232 user 3m13.841s 00:22:47.232 sys 0m24.713s 00:22:47.232 06:48:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:47.232 06:48:51 -- common/autotest_common.sh@10 -- # set +x 00:22:47.232 ************************************ 00:22:47.232 END TEST nvmf_multiconnection 00:22:47.232 ************************************ 00:22:47.232 06:48:51 -- nvmf/nvmf.sh@67 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:47.232 06:48:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:47.232 06:48:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:47.232 06:48:51 -- common/autotest_common.sh@10 -- # set +x 00:22:47.491 ************************************ 00:22:47.491 START TEST nvmf_initiator_timeout 00:22:47.491 ************************************ 00:22:47.491 06:48:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:47.491 * Looking for test storage... 00:22:47.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:47.491 06:48:51 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:47.491 06:48:51 -- nvmf/common.sh@7 -- # uname -s 00:22:47.491 06:48:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:47.491 06:48:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:47.491 06:48:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:47.491 06:48:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:47.491 06:48:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:47.491 06:48:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:47.491 06:48:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:47.491 06:48:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:47.491 06:48:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:47.491 06:48:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:47.491 06:48:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:47.491 06:48:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:47.491 06:48:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:47.491 06:48:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:47.491 06:48:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:47.491 06:48:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:47.491 06:48:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:47.491 06:48:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 
00:22:47.491 06:48:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:47.491 06:48:51 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:47.491 06:48:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.491 06:48:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.491 06:48:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.491 06:48:51 -- paths/export.sh@5 -- # export PATH 00:22:47.491 06:48:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.491 06:48:51 -- nvmf/common.sh@47 -- # : 0 00:22:47.491 06:48:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:47.491 06:48:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:47.491 06:48:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:47.491 06:48:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:47.491 06:48:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:47.491 06:48:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:47.491 06:48:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:47.491 06:48:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:47.491 06:48:51 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:47.491 06:48:51 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:47.491 06:48:51 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:22:47.491 06:48:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:47.491 06:48:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:22:47.491 06:48:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:47.491 06:48:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:47.491 06:48:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:47.491 06:48:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.491 06:48:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:47.491 06:48:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.491 06:48:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:47.491 06:48:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:47.491 06:48:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:47.491 06:48:51 -- common/autotest_common.sh@10 -- # set +x 00:22:49.392 06:48:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:49.392 06:48:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:49.392 06:48:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:49.392 06:48:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:49.392 06:48:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:49.392 06:48:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:49.392 06:48:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:49.392 06:48:53 -- nvmf/common.sh@295 -- # net_devs=() 00:22:49.392 06:48:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:49.392 06:48:53 -- nvmf/common.sh@296 -- # e810=() 00:22:49.392 06:48:53 -- nvmf/common.sh@296 -- # local -ga e810 00:22:49.392 06:48:53 -- nvmf/common.sh@297 -- # x722=() 00:22:49.392 06:48:53 -- nvmf/common.sh@297 -- # local -ga x722 00:22:49.392 06:48:53 -- nvmf/common.sh@298 -- # mlx=() 00:22:49.392 06:48:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:49.392 06:48:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:49.392 06:48:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:49.392 06:48:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:49.392 06:48:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:49.392 06:48:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:49.392 06:48:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:49.392 06:48:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:49.392 06:48:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:49.392 06:48:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:49.392 06:48:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:49.392 06:48:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:49.392 06:48:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:49.392 06:48:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:49.392 06:48:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:49.392 06:48:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:49.392 06:48:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:49.392 06:48:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:49.392 06:48:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:49.392 06:48:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:49.392 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:49.392 06:48:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:49.392 06:48:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:49.392 06:48:53 -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:22:49.392 06:48:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.392 06:48:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:49.392 06:48:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:49.393 06:48:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:49.393 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:49.393 06:48:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:49.393 06:48:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:49.393 06:48:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.393 06:48:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.393 06:48:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:49.393 06:48:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:49.393 06:48:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:49.393 06:48:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:49.393 06:48:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:49.393 06:48:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.393 06:48:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:49.393 06:48:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.393 06:48:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:49.393 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:49.393 06:48:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.393 06:48:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:49.393 06:48:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.393 06:48:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:49.393 06:48:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.393 06:48:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:49.393 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:49.393 06:48:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.393 06:48:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:49.393 06:48:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:49.393 06:48:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:49.393 06:48:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:49.393 06:48:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:49.393 06:48:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:49.393 06:48:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:49.393 06:48:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:49.393 06:48:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:49.393 06:48:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:49.393 06:48:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:49.393 06:48:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:49.393 06:48:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:49.393 06:48:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:49.393 06:48:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:49.393 06:48:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:49.652 06:48:54 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:49.652 06:48:54 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:49.652 06:48:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:49.652 
06:48:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:49.652 06:48:54 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:49.652 06:48:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:49.652 06:48:54 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:49.652 06:48:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:49.652 06:48:54 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:49.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:49.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:22:49.652 00:22:49.652 --- 10.0.0.2 ping statistics --- 00:22:49.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.652 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:22:49.652 06:48:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:49.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:49.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:22:49.652 00:22:49.652 --- 10.0.0.1 ping statistics --- 00:22:49.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.652 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:22:49.652 06:48:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:49.652 06:48:54 -- nvmf/common.sh@411 -- # return 0 00:22:49.652 06:48:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:49.652 06:48:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:49.652 06:48:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:49.652 06:48:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:49.652 06:48:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:49.652 06:48:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:49.652 06:48:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:49.652 06:48:54 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:22:49.652 06:48:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:49.652 06:48:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:49.652 06:48:54 -- common/autotest_common.sh@10 -- # set +x 00:22:49.652 06:48:54 -- nvmf/common.sh@470 -- # nvmfpid=38407 00:22:49.652 06:48:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:49.652 06:48:54 -- nvmf/common.sh@471 -- # waitforlisten 38407 00:22:49.652 06:48:54 -- common/autotest_common.sh@817 -- # '[' -z 38407 ']' 00:22:49.652 06:48:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.652 06:48:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:49.652 06:48:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.652 06:48:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:49.652 06:48:54 -- common/autotest_common.sh@10 -- # set +x 00:22:49.652 [2024-04-17 06:48:54.191613] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:22:49.652 [2024-04-17 06:48:54.191688] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.652 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.910 [2024-04-17 06:48:54.262059] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:49.910 [2024-04-17 06:48:54.356865] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.910 [2024-04-17 06:48:54.356922] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.910 [2024-04-17 06:48:54.356951] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.910 [2024-04-17 06:48:54.356962] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.910 [2024-04-17 06:48:54.356979] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:49.910 [2024-04-17 06:48:54.357053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.910 [2024-04-17 06:48:54.357133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.910 [2024-04-17 06:48:54.357227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:49.910 [2024-04-17 06:48:54.357231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.910 06:48:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:49.910 06:48:54 -- common/autotest_common.sh@850 -- # return 0 00:22:49.910 06:48:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:49.910 06:48:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:49.910 06:48:54 -- common/autotest_common.sh@10 -- # set +x 00:22:49.910 06:48:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.910 06:48:54 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:49.910 06:48:54 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:49.910 06:48:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.910 06:48:54 -- common/autotest_common.sh@10 -- # set +x 00:22:50.168 Malloc0 00:22:50.168 06:48:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.168 06:48:54 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:22:50.168 06:48:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.168 06:48:54 -- common/autotest_common.sh@10 -- # set +x 00:22:50.168 Delay0 00:22:50.168 06:48:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.168 06:48:54 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:50.168 06:48:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.168 06:48:54 -- common/autotest_common.sh@10 -- # set +x 00:22:50.168 [2024-04-17 06:48:54.540584] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.168 06:48:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.168 06:48:54 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:50.168 06:48:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.168 06:48:54 
-- common/autotest_common.sh@10 -- # set +x 00:22:50.168 06:48:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.168 06:48:54 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:50.168 06:48:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.168 06:48:54 -- common/autotest_common.sh@10 -- # set +x 00:22:50.168 06:48:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.168 06:48:54 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:50.168 06:48:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.168 06:48:54 -- common/autotest_common.sh@10 -- # set +x 00:22:50.168 [2024-04-17 06:48:54.568789] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.168 06:48:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.168 06:48:54 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:50.734 06:48:55 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:22:50.734 06:48:55 -- common/autotest_common.sh@1184 -- # local i=0 00:22:50.734 06:48:55 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:50.734 06:48:55 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:50.734 06:48:55 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:52.667 06:48:57 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:52.667 06:48:57 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:52.667 06:48:57 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:22:52.667 06:48:57 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:52.667 06:48:57 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:52.667 06:48:57 -- common/autotest_common.sh@1194 -- # return 0 00:22:52.667 06:48:57 -- target/initiator_timeout.sh@35 -- # fio_pid=38829 00:22:52.667 06:48:57 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:22:52.667 06:48:57 -- target/initiator_timeout.sh@37 -- # sleep 3 00:22:52.667 [global] 00:22:52.667 thread=1 00:22:52.667 invalidate=1 00:22:52.667 rw=write 00:22:52.667 time_based=1 00:22:52.667 runtime=60 00:22:52.667 ioengine=libaio 00:22:52.667 direct=1 00:22:52.667 bs=4096 00:22:52.667 iodepth=1 00:22:52.667 norandommap=0 00:22:52.667 numjobs=1 00:22:52.667 00:22:52.667 verify_dump=1 00:22:52.667 verify_backlog=512 00:22:52.667 verify_state_save=0 00:22:52.667 do_verify=1 00:22:52.667 verify=crc32c-intel 00:22:52.667 [job0] 00:22:52.667 filename=/dev/nvme0n1 00:22:52.667 Could not set queue depth (nvme0n1) 00:22:52.925 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:52.925 fio-3.35 00:22:52.925 Starting 1 thread 00:22:56.208 06:49:00 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:22:56.208 06:49:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:56.208 06:49:00 -- common/autotest_common.sh@10 -- # set +x 00:22:56.208 true 00:22:56.208 06:49:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:56.208 06:49:00 -- 
target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:22:56.208 06:49:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:56.208 06:49:00 -- common/autotest_common.sh@10 -- # set +x 00:22:56.208 true 00:22:56.208 06:49:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:56.208 06:49:00 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:22:56.208 06:49:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:56.208 06:49:00 -- common/autotest_common.sh@10 -- # set +x 00:22:56.208 true 00:22:56.208 06:49:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:56.208 06:49:00 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:22:56.208 06:49:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:56.208 06:49:00 -- common/autotest_common.sh@10 -- # set +x 00:22:56.208 true 00:22:56.208 06:49:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:56.208 06:49:00 -- target/initiator_timeout.sh@45 -- # sleep 3 00:22:58.734 06:49:03 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:22:58.734 06:49:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.734 06:49:03 -- common/autotest_common.sh@10 -- # set +x 00:22:58.734 true 00:22:58.734 06:49:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.734 06:49:03 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:22:58.734 06:49:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.734 06:49:03 -- common/autotest_common.sh@10 -- # set +x 00:22:58.734 true 00:22:58.734 06:49:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.734 06:49:03 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:22:58.734 06:49:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.734 06:49:03 -- common/autotest_common.sh@10 -- # set +x 00:22:58.734 true 00:22:58.734 06:49:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.734 06:49:03 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:22:58.734 06:49:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.734 06:49:03 -- common/autotest_common.sh@10 -- # set +x 00:22:58.734 true 00:22:58.734 06:49:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.734 06:49:03 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:22:58.734 06:49:03 -- target/initiator_timeout.sh@54 -- # wait 38829 00:23:54.939 00:23:54.939 job0: (groupid=0, jobs=1): err= 0: pid=38902: Wed Apr 17 06:49:57 2024 00:23:54.939 read: IOPS=24, BW=97.6KiB/s (100.0kB/s)(5860KiB/60025msec) 00:23:54.939 slat (usec): min=5, max=7865, avg=22.14, stdev=205.33 00:23:54.939 clat (usec): min=347, max=41231k, avg=40530.54, stdev=1077056.12 00:23:54.939 lat (usec): min=353, max=41231k, avg=40552.68, stdev=1077056.10 00:23:54.939 clat percentiles (usec): 00:23:54.939 | 1.00th=[ 355], 5.00th=[ 367], 10.00th=[ 375], 00:23:54.939 | 20.00th=[ 388], 30.00th=[ 404], 40.00th=[ 429], 00:23:54.939 | 50.00th=[ 474], 60.00th=[ 545], 70.00th=[ 668], 00:23:54.939 | 80.00th=[ 41157], 90.00th=[ 41157], 95.00th=[ 41157], 00:23:54.939 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42206], 00:23:54.939 | 99.95th=[17112761], 99.99th=[17112761] 00:23:54.939 write: IOPS=25, BW=102KiB/s (105kB/s)(6144KiB/60025msec); 0 zone resets 00:23:54.939 slat (nsec): 
min=6745, max=82101, avg=23050.77, stdev=12673.41 00:23:54.939 clat (usec): min=245, max=612, avg=365.94, stdev=52.38 00:23:54.939 lat (usec): min=256, max=641, avg=388.99, stdev=58.55 00:23:54.939 clat percentiles (usec): 00:23:54.939 | 1.00th=[ 265], 5.00th=[ 289], 10.00th=[ 302], 20.00th=[ 314], 00:23:54.939 | 30.00th=[ 330], 40.00th=[ 355], 50.00th=[ 371], 60.00th=[ 383], 00:23:54.939 | 70.00th=[ 392], 80.00th=[ 412], 90.00th=[ 437], 95.00th=[ 449], 00:23:54.939 | 99.00th=[ 478], 99.50th=[ 490], 99.90th=[ 529], 99.95th=[ 611], 00:23:54.939 | 99.99th=[ 611] 00:23:54.939 bw ( KiB/s): min= 2512, max= 5680, per=100.00%, avg=4096.00, stdev=1584.00, samples=3 00:23:54.939 iops : min= 628, max= 1420, avg=1024.00, stdev=396.00, samples=3 00:23:54.939 lat (usec) : 250=0.10%, 500=76.14%, 750=9.33% 00:23:54.939 lat (msec) : 2=0.03%, 50=14.36%, >=2000=0.03% 00:23:54.939 cpu : usr=0.06%, sys=0.14%, ctx=3002, majf=0, minf=2 00:23:54.939 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:54.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:54.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:54.939 issued rwts: total=1465,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:54.939 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:54.939 00:23:54.939 Run status group 0 (all jobs): 00:23:54.939 READ: bw=97.6KiB/s (100.0kB/s), 97.6KiB/s-97.6KiB/s (100.0kB/s-100.0kB/s), io=5860KiB (6001kB), run=60025-60025msec 00:23:54.939 WRITE: bw=102KiB/s (105kB/s), 102KiB/s-102KiB/s (105kB/s-105kB/s), io=6144KiB (6291kB), run=60025-60025msec 00:23:54.939 00:23:54.939 Disk stats (read/write): 00:23:54.939 nvme0n1: ios=1560/1536, merge=0/0, ticks=19279/509, in_queue=19788, util=99.90% 00:23:54.939 06:49:57 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:54.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:54.939 06:49:57 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:54.939 06:49:57 -- common/autotest_common.sh@1205 -- # local i=0 00:23:54.939 06:49:57 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:54.939 06:49:57 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:54.939 06:49:57 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:54.939 06:49:57 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:54.939 06:49:57 -- common/autotest_common.sh@1217 -- # return 0 00:23:54.939 06:49:57 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:23:54.939 06:49:57 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:23:54.939 nvmf hotplug test: fio successful as expected 00:23:54.939 06:49:57 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:54.939 06:49:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:54.939 06:49:57 -- common/autotest_common.sh@10 -- # set +x 00:23:54.939 06:49:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:54.939 06:49:57 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:23:54.939 06:49:57 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:23:54.939 06:49:57 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:23:54.939 06:49:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:54.939 06:49:57 -- nvmf/common.sh@117 -- # sync 00:23:54.939 06:49:57 -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:54.939 06:49:57 -- nvmf/common.sh@120 -- # set +e 00:23:54.939 06:49:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:54.939 06:49:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:54.939 rmmod nvme_tcp 00:23:54.939 rmmod nvme_fabrics 00:23:54.939 rmmod nvme_keyring 00:23:54.939 06:49:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:54.939 06:49:57 -- nvmf/common.sh@124 -- # set -e 00:23:54.940 06:49:57 -- nvmf/common.sh@125 -- # return 0 00:23:54.940 06:49:57 -- nvmf/common.sh@478 -- # '[' -n 38407 ']' 00:23:54.940 06:49:57 -- nvmf/common.sh@479 -- # killprocess 38407 00:23:54.940 06:49:57 -- common/autotest_common.sh@936 -- # '[' -z 38407 ']' 00:23:54.940 06:49:57 -- common/autotest_common.sh@940 -- # kill -0 38407 00:23:54.940 06:49:57 -- common/autotest_common.sh@941 -- # uname 00:23:54.940 06:49:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:54.940 06:49:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 38407 00:23:54.940 06:49:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:54.940 06:49:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:54.940 06:49:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 38407' 00:23:54.940 killing process with pid 38407 00:23:54.940 06:49:57 -- common/autotest_common.sh@955 -- # kill 38407 00:23:54.940 06:49:57 -- common/autotest_common.sh@960 -- # wait 38407 00:23:54.940 06:49:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:54.940 06:49:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:54.940 06:49:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:54.940 06:49:58 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:54.940 06:49:58 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:54.940 06:49:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.940 06:49:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:54.940 06:49:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.507 06:50:00 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:55.507 00:23:55.507 real 1m8.178s 00:23:55.507 user 4m10.511s 00:23:55.507 sys 0m6.741s 00:23:55.507 06:50:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:55.507 06:50:00 -- common/autotest_common.sh@10 -- # set +x 00:23:55.507 ************************************ 00:23:55.507 END TEST nvmf_initiator_timeout 00:23:55.507 ************************************ 00:23:55.766 06:50:00 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:23:55.766 06:50:00 -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:23:55.766 06:50:00 -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:23:55.766 06:50:00 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:55.766 06:50:00 -- common/autotest_common.sh@10 -- # set +x 00:23:57.665 06:50:02 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:57.665 06:50:02 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:57.665 06:50:02 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:57.665 06:50:02 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:57.665 06:50:02 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:57.665 06:50:02 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:57.665 06:50:02 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:57.665 06:50:02 -- nvmf/common.sh@295 -- # net_devs=() 00:23:57.665 06:50:02 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:57.665 06:50:02 -- 
nvmf/common.sh@296 -- # e810=() 00:23:57.665 06:50:02 -- nvmf/common.sh@296 -- # local -ga e810 00:23:57.665 06:50:02 -- nvmf/common.sh@297 -- # x722=() 00:23:57.665 06:50:02 -- nvmf/common.sh@297 -- # local -ga x722 00:23:57.665 06:50:02 -- nvmf/common.sh@298 -- # mlx=() 00:23:57.665 06:50:02 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:57.665 06:50:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:57.665 06:50:02 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:57.665 06:50:02 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:57.665 06:50:02 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:57.665 06:50:02 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:57.665 06:50:02 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:57.665 06:50:02 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:57.665 06:50:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:57.665 06:50:02 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:57.665 06:50:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:57.665 06:50:02 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:57.665 06:50:02 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:57.665 06:50:02 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:57.665 06:50:02 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:57.665 06:50:02 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:57.665 06:50:02 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:57.665 06:50:02 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:57.665 06:50:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:57.665 06:50:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:57.665 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:57.665 06:50:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:57.665 06:50:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:57.665 06:50:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.665 06:50:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.665 06:50:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:57.666 06:50:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:57.666 06:50:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:57.666 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:57.666 06:50:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:57.666 06:50:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:57.666 06:50:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.666 06:50:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.666 06:50:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:57.666 06:50:02 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:57.666 06:50:02 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:57.666 06:50:02 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:57.666 06:50:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:57.666 06:50:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.666 06:50:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:57.666 06:50:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.666 06:50:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:23:57.666 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:57.666 06:50:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.666 06:50:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:57.666 06:50:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.666 06:50:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:57.666 06:50:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.666 06:50:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:57.666 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:57.666 06:50:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.666 06:50:02 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:57.666 06:50:02 -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:57.666 06:50:02 -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:23:57.666 06:50:02 -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:57.666 06:50:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:57.666 06:50:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:57.666 06:50:02 -- common/autotest_common.sh@10 -- # set +x 00:23:57.666 ************************************ 00:23:57.666 START TEST nvmf_perf_adq 00:23:57.666 ************************************ 00:23:57.666 06:50:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:57.666 * Looking for test storage... 00:23:57.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:57.666 06:50:02 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:57.666 06:50:02 -- nvmf/common.sh@7 -- # uname -s 00:23:57.666 06:50:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:57.666 06:50:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:57.666 06:50:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:57.666 06:50:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:57.666 06:50:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:57.666 06:50:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:57.666 06:50:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:57.666 06:50:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:57.666 06:50:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:57.666 06:50:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:57.666 06:50:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:57.666 06:50:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:57.666 06:50:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:57.666 06:50:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:57.666 06:50:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:57.666 06:50:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:57.666 06:50:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:57.666 06:50:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:57.666 06:50:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:57.666 06:50:02 -- scripts/common.sh@511 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:23:57.666 06:50:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.666 06:50:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.666 06:50:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.666 06:50:02 -- paths/export.sh@5 -- # export PATH 00:23:57.666 06:50:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.666 06:50:02 -- nvmf/common.sh@47 -- # : 0 00:23:57.666 06:50:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:57.666 06:50:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:57.666 06:50:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:57.666 06:50:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:57.666 06:50:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:57.666 06:50:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:57.666 06:50:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:57.666 06:50:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:57.666 06:50:02 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:23:57.666 06:50:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:57.666 06:50:02 -- common/autotest_common.sh@10 -- # set +x 00:23:59.567 06:50:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:59.567 06:50:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:59.567 06:50:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:59.567 06:50:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:59.567 06:50:04 -- nvmf/common.sh@292 -- # 
local -a pci_net_devs 00:23:59.567 06:50:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:59.567 06:50:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:59.567 06:50:04 -- nvmf/common.sh@295 -- # net_devs=() 00:23:59.567 06:50:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:59.567 06:50:04 -- nvmf/common.sh@296 -- # e810=() 00:23:59.567 06:50:04 -- nvmf/common.sh@296 -- # local -ga e810 00:23:59.567 06:50:04 -- nvmf/common.sh@297 -- # x722=() 00:23:59.567 06:50:04 -- nvmf/common.sh@297 -- # local -ga x722 00:23:59.567 06:50:04 -- nvmf/common.sh@298 -- # mlx=() 00:23:59.567 06:50:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:59.567 06:50:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:59.567 06:50:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:59.567 06:50:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:59.567 06:50:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:59.567 06:50:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:59.567 06:50:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:59.567 06:50:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:59.567 06:50:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:59.567 06:50:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:59.567 06:50:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:59.567 06:50:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:59.567 06:50:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:59.567 06:50:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:59.567 06:50:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:59.567 06:50:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:59.567 06:50:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:59.567 06:50:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:59.567 06:50:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:59.567 06:50:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:59.567 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:59.567 06:50:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:59.567 06:50:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:59.567 06:50:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.567 06:50:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.567 06:50:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:59.567 06:50:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:59.567 06:50:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:59.567 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:59.567 06:50:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:59.567 06:50:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:59.567 06:50:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.567 06:50:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.567 06:50:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:59.567 06:50:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:59.567 06:50:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:59.567 06:50:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:59.567 06:50:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:59.567 06:50:04 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.567 06:50:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:59.567 06:50:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.567 06:50:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:59.567 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:59.567 06:50:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.567 06:50:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:59.568 06:50:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.568 06:50:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:59.568 06:50:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.568 06:50:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:59.568 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:59.568 06:50:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.568 06:50:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:59.568 06:50:04 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:59.568 06:50:04 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:23:59.568 06:50:04 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:59.568 06:50:04 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:23:59.568 06:50:04 -- target/perf_adq.sh@52 -- # rmmod ice 00:24:00.150 06:50:04 -- target/perf_adq.sh@53 -- # modprobe ice 00:24:02.050 06:50:06 -- target/perf_adq.sh@54 -- # sleep 5 00:24:07.371 06:50:11 -- target/perf_adq.sh@67 -- # nvmftestinit 00:24:07.371 06:50:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:07.371 06:50:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:07.371 06:50:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:07.371 06:50:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:07.371 06:50:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:07.371 06:50:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.371 06:50:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:07.371 06:50:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.371 06:50:11 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:07.371 06:50:11 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:07.371 06:50:11 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:07.371 06:50:11 -- common/autotest_common.sh@10 -- # set +x 00:24:07.371 06:50:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:07.371 06:50:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:07.371 06:50:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:07.371 06:50:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:07.371 06:50:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:07.371 06:50:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:07.371 06:50:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:07.371 06:50:11 -- nvmf/common.sh@295 -- # net_devs=() 00:24:07.371 06:50:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:07.371 06:50:11 -- nvmf/common.sh@296 -- # e810=() 00:24:07.371 06:50:11 -- nvmf/common.sh@296 -- # local -ga e810 00:24:07.371 06:50:11 -- nvmf/common.sh@297 -- # x722=() 00:24:07.371 06:50:11 -- nvmf/common.sh@297 -- # local -ga x722 00:24:07.371 06:50:11 -- nvmf/common.sh@298 -- # mlx=() 00:24:07.371 06:50:11 -- nvmf/common.sh@298 -- # 
local -ga mlx 00:24:07.371 06:50:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:07.371 06:50:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:07.371 06:50:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:07.371 06:50:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:07.371 06:50:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:07.371 06:50:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:07.371 06:50:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:07.371 06:50:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:07.371 06:50:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:07.371 06:50:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:07.371 06:50:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:07.371 06:50:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:07.371 06:50:11 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:07.371 06:50:11 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:07.371 06:50:11 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:07.371 06:50:11 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:07.371 06:50:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:07.371 06:50:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:07.371 06:50:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:07.371 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:07.371 06:50:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:07.371 06:50:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:07.371 06:50:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.371 06:50:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.371 06:50:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:07.371 06:50:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:07.371 06:50:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:07.371 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:07.371 06:50:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:07.371 06:50:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:07.371 06:50:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.371 06:50:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.371 06:50:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:07.371 06:50:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:07.371 06:50:11 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:07.371 06:50:11 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:07.371 06:50:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:07.371 06:50:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.371 06:50:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:07.371 06:50:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.371 06:50:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:07.371 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:07.371 06:50:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.371 06:50:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:07.371 06:50:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
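The device-enumeration pass traced above resolves each supported Intel E810/X722 (and Mellanox) PCI function to its kernel network interface by globbing the function's sysfs net/ directory; on this rig 0000:0a:00.0 and 0000:0a:00.1 map to cvl_0_0 and cvl_0_1. A minimal standalone sketch of that lookup (the helper name is illustrative, not part of nvmf/common.sh):

net_devs_for_pci() {                                  # e.g. net_devs_for_pci 0000:0a:00.0 -> cvl_0_0
    local pci=$1
    local devs=("/sys/bus/pci/devices/$pci/net/"*)    # same glob as nvmf/common.sh@383 above
    [[ -e ${devs[0]} ]] || return 1                   # no netdev bound (driver not loaded/unbound)
    printf '%s\n' "${devs[@]##*/}"                    # strip the sysfs path, keep only the ifname
}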
00:24:07.371 06:50:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:07.371 06:50:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.372 06:50:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:07.372 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:07.372 06:50:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.372 06:50:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:07.372 06:50:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:07.372 06:50:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:07.372 06:50:11 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:07.372 06:50:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:07.372 06:50:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:07.372 06:50:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:07.372 06:50:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:07.372 06:50:11 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:07.372 06:50:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:07.372 06:50:11 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:07.372 06:50:11 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:07.372 06:50:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:07.372 06:50:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:07.372 06:50:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:07.372 06:50:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:07.372 06:50:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:07.372 06:50:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:07.372 06:50:11 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:07.372 06:50:11 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:07.372 06:50:11 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:07.372 06:50:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:07.372 06:50:11 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:07.372 06:50:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:07.372 06:50:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:07.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:07.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:24:07.372 00:24:07.372 --- 10.0.0.2 ping statistics --- 00:24:07.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.372 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:24:07.372 06:50:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:07.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:07.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:24:07.372 00:24:07.372 --- 10.0.0.1 ping statistics --- 00:24:07.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.372 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:24:07.372 06:50:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:07.372 06:50:11 -- nvmf/common.sh@411 -- # return 0 00:24:07.372 06:50:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:07.372 06:50:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:07.372 06:50:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:07.372 06:50:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:07.372 06:50:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:07.372 06:50:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:07.372 06:50:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:07.372 06:50:11 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:07.372 06:50:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:07.372 06:50:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:07.372 06:50:11 -- common/autotest_common.sh@10 -- # set +x 00:24:07.372 06:50:11 -- nvmf/common.sh@470 -- # nvmfpid=50434 00:24:07.372 06:50:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:07.372 06:50:11 -- nvmf/common.sh@471 -- # waitforlisten 50434 00:24:07.372 06:50:11 -- common/autotest_common.sh@817 -- # '[' -z 50434 ']' 00:24:07.372 06:50:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.372 06:50:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:07.372 06:50:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:07.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:07.372 06:50:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:07.372 06:50:11 -- common/autotest_common.sh@10 -- # set +x 00:24:07.372 [2024-04-17 06:50:11.344073] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:24:07.372 [2024-04-17 06:50:11.344145] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:07.372 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.372 [2024-04-17 06:50:11.407962] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:07.372 [2024-04-17 06:50:11.491335] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:07.372 [2024-04-17 06:50:11.491387] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:07.372 [2024-04-17 06:50:11.491411] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:07.372 [2024-04-17 06:50:11.491437] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:07.372 [2024-04-17 06:50:11.491446] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:07.372 [2024-04-17 06:50:11.491545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.372 [2024-04-17 06:50:11.491610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:07.372 [2024-04-17 06:50:11.491678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:07.372 [2024-04-17 06:50:11.491681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.372 06:50:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:07.372 06:50:11 -- common/autotest_common.sh@850 -- # return 0 00:24:07.372 06:50:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:07.372 06:50:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:07.372 06:50:11 -- common/autotest_common.sh@10 -- # set +x 00:24:07.372 06:50:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:07.372 06:50:11 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:24:07.372 06:50:11 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:24:07.372 06:50:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.372 06:50:11 -- common/autotest_common.sh@10 -- # set +x 00:24:07.372 06:50:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.372 06:50:11 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:24:07.372 06:50:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.372 06:50:11 -- common/autotest_common.sh@10 -- # set +x 00:24:07.372 06:50:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.372 06:50:11 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:24:07.372 06:50:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.372 06:50:11 -- common/autotest_common.sh@10 -- # set +x 00:24:07.372 [2024-04-17 06:50:11.675695] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:07.372 06:50:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.372 06:50:11 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:07.372 06:50:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.372 06:50:11 -- common/autotest_common.sh@10 -- # set +x 00:24:07.372 Malloc1 00:24:07.372 06:50:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.372 06:50:11 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:07.372 06:50:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.372 06:50:11 -- common/autotest_common.sh@10 -- # set +x 00:24:07.372 06:50:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.372 06:50:11 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:07.372 06:50:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.372 06:50:11 -- common/autotest_common.sh@10 -- # set +x 00:24:07.372 06:50:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.372 06:50:11 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:07.372 06:50:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.372 06:50:11 -- common/autotest_common.sh@10 -- # set +x 00:24:07.372 [2024-04-17 06:50:11.726660] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:07.372 06:50:11 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.372 06:50:11 -- target/perf_adq.sh@73 -- # perfpid=50465 00:24:07.372 06:50:11 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:07.372 06:50:11 -- target/perf_adq.sh@74 -- # sleep 2 00:24:07.372 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.271 06:50:13 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:24:09.271 06:50:13 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:24:09.271 06:50:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.271 06:50:13 -- target/perf_adq.sh@76 -- # wc -l 00:24:09.271 06:50:13 -- common/autotest_common.sh@10 -- # set +x 00:24:09.271 06:50:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.271 06:50:13 -- target/perf_adq.sh@76 -- # count=4 00:24:09.271 06:50:13 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:24:09.271 06:50:13 -- target/perf_adq.sh@81 -- # wait 50465 00:24:17.382 Initializing NVMe Controllers 00:24:17.382 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:17.382 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:17.382 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:17.382 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:17.382 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:17.382 Initialization complete. Launching workers. 00:24:17.382 ======================================================== 00:24:17.382 Latency(us) 00:24:17.382 Device Information : IOPS MiB/s Average min max 00:24:17.382 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8697.80 33.98 7358.43 3470.93 10995.72 00:24:17.382 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10602.00 41.41 6036.60 2294.64 9541.43 00:24:17.382 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10338.60 40.39 6191.58 1967.63 8512.00 00:24:17.382 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10434.20 40.76 6134.78 2005.58 8612.72 00:24:17.382 ======================================================== 00:24:17.382 Total : 40072.60 156.53 6389.05 1967.63 10995.72 00:24:17.382 00:24:17.382 06:50:21 -- target/perf_adq.sh@82 -- # nvmftestfini 00:24:17.382 06:50:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:17.382 06:50:21 -- nvmf/common.sh@117 -- # sync 00:24:17.382 06:50:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:17.382 06:50:21 -- nvmf/common.sh@120 -- # set +e 00:24:17.382 06:50:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:17.382 06:50:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:17.382 rmmod nvme_tcp 00:24:17.382 rmmod nvme_fabrics 00:24:17.382 rmmod nvme_keyring 00:24:17.382 06:50:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:17.382 06:50:21 -- nvmf/common.sh@124 -- # set -e 00:24:17.382 06:50:21 -- nvmf/common.sh@125 -- # return 0 00:24:17.382 06:50:21 -- nvmf/common.sh@478 -- # '[' -n 50434 ']' 00:24:17.382 06:50:21 -- nvmf/common.sh@479 -- # killprocess 50434 00:24:17.382 06:50:21 -- common/autotest_common.sh@936 -- # '[' -z 50434 ']' 00:24:17.382 06:50:21 -- common/autotest_common.sh@940 -- # kill -0 
50434 00:24:17.382 06:50:21 -- common/autotest_common.sh@941 -- # uname 00:24:17.382 06:50:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:17.382 06:50:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 50434 00:24:17.382 06:50:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:17.382 06:50:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:17.382 06:50:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 50434' 00:24:17.382 killing process with pid 50434 00:24:17.382 06:50:21 -- common/autotest_common.sh@955 -- # kill 50434 00:24:17.382 06:50:21 -- common/autotest_common.sh@960 -- # wait 50434 00:24:17.641 06:50:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:17.641 06:50:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:17.641 06:50:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:17.641 06:50:22 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:17.641 06:50:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:17.641 06:50:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.641 06:50:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:17.641 06:50:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.171 06:50:24 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:20.171 06:50:24 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:24:20.171 06:50:24 -- target/perf_adq.sh@52 -- # rmmod ice 00:24:20.429 06:50:24 -- target/perf_adq.sh@53 -- # modprobe ice 00:24:21.804 06:50:26 -- target/perf_adq.sh@54 -- # sleep 5 00:24:27.075 06:50:31 -- target/perf_adq.sh@87 -- # nvmftestinit 00:24:27.075 06:50:31 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:27.075 06:50:31 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:27.075 06:50:31 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:27.075 06:50:31 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:27.075 06:50:31 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:27.075 06:50:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.076 06:50:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:27.076 06:50:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.076 06:50:31 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:27.076 06:50:31 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:27.076 06:50:31 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:27.076 06:50:31 -- common/autotest_common.sh@10 -- # set +x 00:24:27.076 06:50:31 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:27.076 06:50:31 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:27.076 06:50:31 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:27.076 06:50:31 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:27.076 06:50:31 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:27.076 06:50:31 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:27.076 06:50:31 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:27.076 06:50:31 -- nvmf/common.sh@295 -- # net_devs=() 00:24:27.076 06:50:31 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:27.076 06:50:31 -- nvmf/common.sh@296 -- # e810=() 00:24:27.076 06:50:31 -- nvmf/common.sh@296 -- # local -ga e810 00:24:27.076 06:50:31 -- nvmf/common.sh@297 -- # x722=() 00:24:27.076 06:50:31 -- nvmf/common.sh@297 -- # local -ga x722 00:24:27.076 06:50:31 -- nvmf/common.sh@298 -- # mlx=() 00:24:27.076 06:50:31 -- nvmf/common.sh@298 
-- # local -ga mlx 00:24:27.076 06:50:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:27.076 06:50:31 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:27.076 06:50:31 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:27.076 06:50:31 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:27.076 06:50:31 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:27.076 06:50:31 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:27.076 06:50:31 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:27.076 06:50:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:27.076 06:50:31 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:27.076 06:50:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:27.076 06:50:31 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:27.076 06:50:31 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:27.076 06:50:31 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:27.076 06:50:31 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:27.076 06:50:31 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:27.076 06:50:31 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:27.076 06:50:31 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:27.076 06:50:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:27.076 06:50:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:27.076 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:27.076 06:50:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:27.076 06:50:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:27.076 06:50:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.076 06:50:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.076 06:50:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:27.076 06:50:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:27.076 06:50:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:27.076 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:27.076 06:50:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:27.076 06:50:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:27.076 06:50:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.076 06:50:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.076 06:50:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:27.076 06:50:31 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:27.076 06:50:31 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:27.076 06:50:31 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:27.076 06:50:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:27.076 06:50:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.076 06:50:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:27.076 06:50:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.076 06:50:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:27.076 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:27.076 06:50:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.076 06:50:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:27.076 06:50:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:24:27.076 06:50:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:27.076 06:50:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.076 06:50:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:27.076 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:27.076 06:50:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.076 06:50:31 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:27.076 06:50:31 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:27.076 06:50:31 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:27.076 06:50:31 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:27.076 06:50:31 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:27.076 06:50:31 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:27.076 06:50:31 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:27.076 06:50:31 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:27.076 06:50:31 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:27.076 06:50:31 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:27.076 06:50:31 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:27.076 06:50:31 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:27.076 06:50:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:27.076 06:50:31 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:27.076 06:50:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:27.076 06:50:31 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:27.076 06:50:31 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:27.076 06:50:31 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:27.076 06:50:31 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:27.076 06:50:31 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:27.076 06:50:31 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:27.076 06:50:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:27.076 06:50:31 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:27.076 06:50:31 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:27.076 06:50:31 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:27.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:27.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:24:27.076 00:24:27.076 --- 10.0.0.2 ping statistics --- 00:24:27.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.076 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:24:27.076 06:50:31 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:27.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:27.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:24:27.076 00:24:27.076 --- 10.0.0.1 ping statistics --- 00:24:27.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.076 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:24:27.076 06:50:31 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:27.076 06:50:31 -- nvmf/common.sh@411 -- # return 0 00:24:27.076 06:50:31 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:27.076 06:50:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:27.076 06:50:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:27.076 06:50:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:27.076 06:50:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:27.076 06:50:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:27.076 06:50:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:27.076 06:50:31 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:24:27.076 06:50:31 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:24:27.076 06:50:31 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:24:27.076 06:50:31 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:24:27.076 net.core.busy_poll = 1 00:24:27.076 06:50:31 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:24:27.076 net.core.busy_read = 1 00:24:27.076 06:50:31 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:24:27.076 06:50:31 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:24:27.076 06:50:31 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:24:27.076 06:50:31 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:24:27.076 06:50:31 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:24:27.335 06:50:31 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:27.335 06:50:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:27.335 06:50:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:27.335 06:50:31 -- common/autotest_common.sh@10 -- # set +x 00:24:27.335 06:50:31 -- nvmf/common.sh@470 -- # nvmfpid=53067 00:24:27.335 06:50:31 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:27.335 06:50:31 -- nvmf/common.sh@471 -- # waitforlisten 53067 00:24:27.335 06:50:31 -- common/autotest_common.sh@817 -- # '[' -z 53067 ']' 00:24:27.335 06:50:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.335 06:50:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:27.335 06:50:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
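adq_configure_driver, traced just above, prepares the ice PF for Application Device Queues: it turns on hardware tc offload, disables the channel-pkt-inspect-optimize private flag, enables socket busy polling, splits the port into two traffic classes with mqprio in channel mode, installs a hardware flower filter so NVMe/TCP traffic to 10.0.0.2:4420 lands in TC1, and finally pins queues with scripts/perf/nvmf/set_xps_rxqs. A condensed restatement of those commands (interface, address and port exactly as in this run; the test executes them inside the cvl_0_0_ns_spdk namespace):

ethtool --offload cvl_0_0 hw-tc-offload on                      # allow tc hardware offload on the ice PF
ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1                                  # busy-poll sockets instead of sleeping in epoll
sysctl -w net.core.busy_read=1
tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev cvl_0_0 ingress
tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1   # steer NVMe/TCP into TC1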
00:24:27.335 06:50:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:27.335 06:50:31 -- common/autotest_common.sh@10 -- # set +x 00:24:27.335 [2024-04-17 06:50:31.737124] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:24:27.335 [2024-04-17 06:50:31.737236] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.335 EAL: No free 2048 kB hugepages reported on node 1 00:24:27.335 [2024-04-17 06:50:31.802790] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:27.335 [2024-04-17 06:50:31.890638] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:27.335 [2024-04-17 06:50:31.890696] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:27.335 [2024-04-17 06:50:31.890710] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:27.335 [2024-04-17 06:50:31.890722] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:27.335 [2024-04-17 06:50:31.890731] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:27.335 [2024-04-17 06:50:31.890785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:27.335 [2024-04-17 06:50:31.890845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:27.335 [2024-04-17 06:50:31.890910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:27.335 [2024-04-17 06:50:31.890913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.335 06:50:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:27.335 06:50:31 -- common/autotest_common.sh@850 -- # return 0 00:24:27.335 06:50:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:27.335 06:50:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:27.335 06:50:31 -- common/autotest_common.sh@10 -- # set +x 00:24:27.600 06:50:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.600 06:50:31 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:24:27.600 06:50:31 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:24:27.600 06:50:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.600 06:50:31 -- common/autotest_common.sh@10 -- # set +x 00:24:27.600 06:50:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.600 06:50:31 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:24:27.600 06:50:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.600 06:50:31 -- common/autotest_common.sh@10 -- # set +x 00:24:27.600 06:50:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.600 06:50:32 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:24:27.601 06:50:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.601 06:50:32 -- common/autotest_common.sh@10 -- # set +x 00:24:27.601 [2024-04-17 06:50:32.088106] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.601 06:50:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.601 06:50:32 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 
00:24:27.601 06:50:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.601 06:50:32 -- common/autotest_common.sh@10 -- # set +x 00:24:27.601 Malloc1 00:24:27.601 06:50:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.601 06:50:32 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:27.601 06:50:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.601 06:50:32 -- common/autotest_common.sh@10 -- # set +x 00:24:27.601 06:50:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.601 06:50:32 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:27.601 06:50:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.601 06:50:32 -- common/autotest_common.sh@10 -- # set +x 00:24:27.601 06:50:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.601 06:50:32 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:27.601 06:50:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.601 06:50:32 -- common/autotest_common.sh@10 -- # set +x 00:24:27.601 [2024-04-17 06:50:32.141362] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.601 06:50:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.601 06:50:32 -- target/perf_adq.sh@94 -- # perfpid=53101 00:24:27.601 06:50:32 -- target/perf_adq.sh@95 -- # sleep 2 00:24:27.601 06:50:32 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:27.601 EAL: No free 2048 kB hugepages reported on node 1 00:24:30.153 06:50:34 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:24:30.153 06:50:34 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:24:30.153 06:50:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.153 06:50:34 -- common/autotest_common.sh@10 -- # set +x 00:24:30.153 06:50:34 -- target/perf_adq.sh@97 -- # wc -l 00:24:30.153 06:50:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.153 06:50:34 -- target/perf_adq.sh@97 -- # count=3 00:24:30.153 06:50:34 -- target/perf_adq.sh@98 -- # [[ 3 -lt 2 ]] 00:24:30.153 06:50:34 -- target/perf_adq.sh@103 -- # wait 53101 00:24:38.258 Initializing NVMe Controllers 00:24:38.258 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:38.258 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:38.258 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:38.258 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:38.258 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:38.258 Initialization complete. Launching workers. 
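adq_configure_nvmf_target then drives the freshly started nvmf_tgt over its RPC socket; the rpc_cmd helper seen in the trace forwards its arguments to scripts/rpc.py. Roughly the same sequence as a plain script, with the transport options, subsystem name, serial and listen address used in this run, followed by the load generator and the poll-group check visible above (this run saw 3 of 4 poll groups with no active I/O qpairs, and the script only objects if that count drops below 2):

  RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

  # socket options must be set before framework init completes
  # (the target was started with --wait-for-rpc)
  $RPC sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
  $RPC framework_start_init

  # TCP transport, 8 KiB I/O units, socket priority 1 to match the tc filter above
  $RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1

  # one 64 MB malloc namespace exported through cnode1 on 10.0.0.2:4420
  $RPC bdev_malloc_create 64 512 -b Malloc1
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # generate load from cores 4-7 (-c 0xF0), then count idle target poll groups
  build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
  sleep 2
  idle=$($RPC nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l)
  (( idle < 2 )) && echo "ADQ did not concentrate qpairs onto few poll groups"
  wait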
00:24:38.258 ======================================================== 00:24:38.258 Latency(us) 00:24:38.258 Device Information : IOPS MiB/s Average min max 00:24:38.258 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4273.80 16.69 14982.94 2258.58 62468.47 00:24:38.258 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4423.40 17.28 14473.89 1946.00 62102.23 00:24:38.258 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4799.60 18.75 13396.40 1375.41 62392.67 00:24:38.258 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4286.70 16.74 14963.60 2205.73 62367.91 00:24:38.258 ======================================================== 00:24:38.258 Total : 17783.50 69.47 14423.47 1375.41 62468.47 00:24:38.258 00:24:38.258 06:50:42 -- target/perf_adq.sh@104 -- # nvmftestfini 00:24:38.258 06:50:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:38.258 06:50:42 -- nvmf/common.sh@117 -- # sync 00:24:38.258 06:50:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:38.258 06:50:42 -- nvmf/common.sh@120 -- # set +e 00:24:38.258 06:50:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:38.258 06:50:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:38.258 rmmod nvme_tcp 00:24:38.258 rmmod nvme_fabrics 00:24:38.258 rmmod nvme_keyring 00:24:38.258 06:50:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:38.258 06:50:42 -- nvmf/common.sh@124 -- # set -e 00:24:38.258 06:50:42 -- nvmf/common.sh@125 -- # return 0 00:24:38.258 06:50:42 -- nvmf/common.sh@478 -- # '[' -n 53067 ']' 00:24:38.258 06:50:42 -- nvmf/common.sh@479 -- # killprocess 53067 00:24:38.258 06:50:42 -- common/autotest_common.sh@936 -- # '[' -z 53067 ']' 00:24:38.258 06:50:42 -- common/autotest_common.sh@940 -- # kill -0 53067 00:24:38.258 06:50:42 -- common/autotest_common.sh@941 -- # uname 00:24:38.258 06:50:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:38.258 06:50:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 53067 00:24:38.258 06:50:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:38.258 06:50:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:38.258 06:50:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 53067' 00:24:38.258 killing process with pid 53067 00:24:38.258 06:50:42 -- common/autotest_common.sh@955 -- # kill 53067 00:24:38.258 06:50:42 -- common/autotest_common.sh@960 -- # wait 53067 00:24:38.258 06:50:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:38.258 06:50:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:38.259 06:50:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:38.259 06:50:42 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:38.259 06:50:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:38.259 06:50:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.259 06:50:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:38.259 06:50:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.160 06:50:44 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:40.160 06:50:44 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:24:40.160 00:24:40.160 real 0m42.570s 00:24:40.160 user 2m34.432s 00:24:40.160 sys 0m11.134s 00:24:40.160 06:50:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:40.160 06:50:44 -- common/autotest_common.sh@10 -- # set +x 00:24:40.160 
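nvmftestfini above then unwinds the ADQ run: the kernel NVMe/TCP modules are unloaded, the target process is stopped, and the test network is dismantled before the next suite starts. A rough equivalent of that cleanup, assuming _remove_spdk_ns simply deletes the namespace created earlier (the helper's body is not shown in this trace):

  sync
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  kill 53067                        # nvmf_tgt started for the ADQ test
  ip netns delete cvl_0_0_ns_spdk   # assumption: returns cvl_0_0 to the default namespace
  ip -4 addr flush cvl_0_1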
************************************ 00:24:40.160 END TEST nvmf_perf_adq 00:24:40.160 ************************************ 00:24:40.160 06:50:44 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:40.160 06:50:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:40.160 06:50:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:40.160 06:50:44 -- common/autotest_common.sh@10 -- # set +x 00:24:40.418 ************************************ 00:24:40.418 START TEST nvmf_shutdown 00:24:40.418 ************************************ 00:24:40.418 06:50:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:40.418 * Looking for test storage... 00:24:40.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:40.418 06:50:44 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:40.418 06:50:44 -- nvmf/common.sh@7 -- # uname -s 00:24:40.418 06:50:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:40.418 06:50:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:40.418 06:50:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:40.418 06:50:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:40.418 06:50:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:40.418 06:50:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:40.418 06:50:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:40.418 06:50:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:40.418 06:50:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:40.418 06:50:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:40.418 06:50:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:40.418 06:50:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:40.418 06:50:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:40.419 06:50:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:40.419 06:50:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:40.419 06:50:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:40.419 06:50:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:40.419 06:50:44 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:40.419 06:50:44 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:40.419 06:50:44 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:40.419 06:50:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.419 06:50:44 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.419 06:50:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.419 06:50:44 -- paths/export.sh@5 -- # export PATH 00:24:40.419 06:50:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.419 06:50:44 -- nvmf/common.sh@47 -- # : 0 00:24:40.419 06:50:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:40.419 06:50:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:40.419 06:50:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:40.419 06:50:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:40.419 06:50:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:40.419 06:50:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:40.419 06:50:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:40.419 06:50:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:40.419 06:50:44 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:40.419 06:50:44 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:40.419 06:50:44 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:24:40.419 06:50:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:40.419 06:50:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:40.419 06:50:44 -- common/autotest_common.sh@10 -- # set +x 00:24:40.419 ************************************ 00:24:40.419 START TEST nvmf_shutdown_tc1 00:24:40.419 ************************************ 00:24:40.419 06:50:45 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:24:40.419 06:50:45 -- target/shutdown.sh@74 -- # starttarget 00:24:40.419 06:50:45 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:40.419 06:50:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:40.419 06:50:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:40.419 06:50:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:40.419 06:50:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:40.419 06:50:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:40.419 
06:50:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.419 06:50:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:40.419 06:50:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.678 06:50:45 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:40.678 06:50:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:40.678 06:50:45 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:40.678 06:50:45 -- common/autotest_common.sh@10 -- # set +x 00:24:42.579 06:50:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:42.579 06:50:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:42.579 06:50:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:42.579 06:50:46 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:42.579 06:50:46 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:42.579 06:50:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:42.579 06:50:46 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:42.579 06:50:46 -- nvmf/common.sh@295 -- # net_devs=() 00:24:42.579 06:50:46 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:42.579 06:50:46 -- nvmf/common.sh@296 -- # e810=() 00:24:42.579 06:50:46 -- nvmf/common.sh@296 -- # local -ga e810 00:24:42.579 06:50:46 -- nvmf/common.sh@297 -- # x722=() 00:24:42.579 06:50:46 -- nvmf/common.sh@297 -- # local -ga x722 00:24:42.579 06:50:46 -- nvmf/common.sh@298 -- # mlx=() 00:24:42.579 06:50:46 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:42.579 06:50:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:42.579 06:50:46 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:42.579 06:50:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:42.579 06:50:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:42.579 06:50:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:42.579 06:50:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:42.579 06:50:46 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:42.579 06:50:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:42.579 06:50:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:42.579 06:50:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:42.579 06:50:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:42.579 06:50:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:42.579 06:50:46 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:42.579 06:50:46 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:42.579 06:50:46 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:42.579 06:50:46 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:42.579 06:50:46 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:42.579 06:50:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:42.579 06:50:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:42.579 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:42.579 06:50:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:42.579 06:50:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:42.579 06:50:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.579 06:50:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.579 06:50:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:42.579 06:50:46 -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:24:42.579 06:50:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:42.579 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:42.579 06:50:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:42.579 06:50:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:42.579 06:50:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.579 06:50:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.579 06:50:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:42.579 06:50:46 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:42.579 06:50:46 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:42.579 06:50:46 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:42.579 06:50:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:42.579 06:50:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.579 06:50:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:42.579 06:50:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.579 06:50:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:42.579 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:42.579 06:50:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.579 06:50:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:42.579 06:50:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.579 06:50:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:42.579 06:50:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.579 06:50:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:42.579 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:42.579 06:50:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.579 06:50:46 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:42.579 06:50:46 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:42.579 06:50:46 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:42.579 06:50:46 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:42.579 06:50:46 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:42.579 06:50:46 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:42.579 06:50:46 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:42.579 06:50:46 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:42.579 06:50:46 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:42.579 06:50:46 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:42.579 06:50:46 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:42.579 06:50:46 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:42.579 06:50:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:42.579 06:50:46 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:42.579 06:50:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:42.579 06:50:46 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:42.579 06:50:46 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:42.579 06:50:46 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:42.579 06:50:46 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:42.579 06:50:46 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:42.579 06:50:46 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:42.579 06:50:46 -- nvmf/common.sh@260 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:42.579 06:50:46 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:42.579 06:50:46 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:42.579 06:50:46 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:42.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:42.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:24:42.579 00:24:42.579 --- 10.0.0.2 ping statistics --- 00:24:42.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.579 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:24:42.579 06:50:46 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:42.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:42.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:24:42.579 00:24:42.579 --- 10.0.0.1 ping statistics --- 00:24:42.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.579 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:24:42.579 06:50:46 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:42.579 06:50:46 -- nvmf/common.sh@411 -- # return 0 00:24:42.579 06:50:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:42.579 06:50:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:42.579 06:50:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:42.579 06:50:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:42.580 06:50:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:42.580 06:50:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:42.580 06:50:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:42.580 06:50:47 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:42.580 06:50:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:42.580 06:50:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:42.580 06:50:47 -- common/autotest_common.sh@10 -- # set +x 00:24:42.580 06:50:47 -- nvmf/common.sh@470 -- # nvmfpid=56277 00:24:42.580 06:50:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:42.580 06:50:47 -- nvmf/common.sh@471 -- # waitforlisten 56277 00:24:42.580 06:50:47 -- common/autotest_common.sh@817 -- # '[' -z 56277 ']' 00:24:42.580 06:50:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.580 06:50:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:42.580 06:50:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.580 06:50:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:42.580 06:50:47 -- common/autotest_common.sh@10 -- # set +x 00:24:42.580 [2024-04-17 06:50:47.066218] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
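For the shutdown suite the same nvmf_tcp_init path rebuilds the point-to-point test network: the target port cvl_0_0 lives in its own namespace as 10.0.0.2, while the initiator port cvl_0_1 stays in the default namespace as 10.0.0.1. A condensed sketch of that setup with the names and addresses from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side

  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # open TCP port 4420 on the initiator-side interface, as the harness does
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # verify both directions, then load the kernel initiator transport
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp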
00:24:42.580 [2024-04-17 06:50:47.066320] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:42.580 EAL: No free 2048 kB hugepages reported on node 1 00:24:42.580 [2024-04-17 06:50:47.131224] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:42.838 [2024-04-17 06:50:47.215732] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:42.838 [2024-04-17 06:50:47.215783] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:42.838 [2024-04-17 06:50:47.215806] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:42.838 [2024-04-17 06:50:47.215818] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:42.838 [2024-04-17 06:50:47.215828] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:42.838 [2024-04-17 06:50:47.215931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:42.838 [2024-04-17 06:50:47.216064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:42.838 [2024-04-17 06:50:47.216131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:42.838 [2024-04-17 06:50:47.216133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:42.838 06:50:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:42.838 06:50:47 -- common/autotest_common.sh@850 -- # return 0 00:24:42.838 06:50:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:42.838 06:50:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:42.838 06:50:47 -- common/autotest_common.sh@10 -- # set +x 00:24:42.838 06:50:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:42.838 06:50:47 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:42.838 06:50:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.838 06:50:47 -- common/autotest_common.sh@10 -- # set +x 00:24:42.838 [2024-04-17 06:50:47.366843] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:42.838 06:50:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.838 06:50:47 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:42.838 06:50:47 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:42.838 06:50:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:42.838 06:50:47 -- common/autotest_common.sh@10 -- # set +x 00:24:42.838 06:50:47 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:42.838 06:50:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:42.838 06:50:47 -- target/shutdown.sh@28 -- # cat 00:24:42.838 06:50:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:42.838 06:50:47 -- target/shutdown.sh@28 -- # cat 00:24:42.838 06:50:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:42.838 06:50:47 -- target/shutdown.sh@28 -- # cat 00:24:42.838 06:50:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:42.838 06:50:47 -- target/shutdown.sh@28 -- # cat 00:24:42.838 06:50:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:42.838 06:50:47 -- target/shutdown.sh@28 
-- # cat 00:24:42.838 06:50:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:42.838 06:50:47 -- target/shutdown.sh@28 -- # cat 00:24:42.838 06:50:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:42.838 06:50:47 -- target/shutdown.sh@28 -- # cat 00:24:42.838 06:50:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:42.838 06:50:47 -- target/shutdown.sh@28 -- # cat 00:24:42.838 06:50:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:42.838 06:50:47 -- target/shutdown.sh@28 -- # cat 00:24:42.838 06:50:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:42.838 06:50:47 -- target/shutdown.sh@28 -- # cat 00:24:42.838 06:50:47 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:42.838 06:50:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.838 06:50:47 -- common/autotest_common.sh@10 -- # set +x 00:24:42.839 Malloc1 00:24:42.839 [2024-04-17 06:50:47.443017] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:43.096 Malloc2 00:24:43.096 Malloc3 00:24:43.096 Malloc4 00:24:43.096 Malloc5 00:24:43.096 Malloc6 00:24:43.355 Malloc7 00:24:43.355 Malloc8 00:24:43.355 Malloc9 00:24:43.355 Malloc10 00:24:43.355 06:50:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.355 06:50:47 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:43.355 06:50:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:43.355 06:50:47 -- common/autotest_common.sh@10 -- # set +x 00:24:43.355 06:50:47 -- target/shutdown.sh@78 -- # perfpid=56448 00:24:43.355 06:50:47 -- target/shutdown.sh@79 -- # waitforlisten 56448 /var/tmp/bdevperf.sock 00:24:43.355 06:50:47 -- common/autotest_common.sh@817 -- # '[' -z 56448 ']' 00:24:43.355 06:50:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:43.355 06:50:47 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:43.355 06:50:47 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:43.355 06:50:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:43.355 06:50:47 -- nvmf/common.sh@521 -- # config=() 00:24:43.355 06:50:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:43.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:43.355 06:50:47 -- nvmf/common.sh@521 -- # local subsystem config 00:24:43.355 06:50:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:43.355 06:50:47 -- common/autotest_common.sh@10 -- # set +x 00:24:43.355 06:50:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:43.355 06:50:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:43.355 { 00:24:43.355 "params": { 00:24:43.355 "name": "Nvme$subsystem", 00:24:43.355 "trtype": "$TEST_TRANSPORT", 00:24:43.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:43.355 "adrfam": "ipv4", 00:24:43.355 "trsvcid": "$NVMF_PORT", 00:24:43.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:43.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:43.355 "hdgst": ${hdgst:-false}, 00:24:43.355 "ddgst": ${ddgst:-false} 00:24:43.355 }, 00:24:43.355 "method": "bdev_nvme_attach_controller" 00:24:43.355 } 00:24:43.355 EOF 00:24:43.355 )") 00:24:43.355 06:50:47 -- nvmf/common.sh@543 -- # cat 00:24:43.355 06:50:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:43.355 06:50:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:43.355 { 00:24:43.355 "params": { 00:24:43.355 "name": "Nvme$subsystem", 00:24:43.355 "trtype": "$TEST_TRANSPORT", 00:24:43.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:43.356 "adrfam": "ipv4", 00:24:43.356 "trsvcid": "$NVMF_PORT", 00:24:43.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:43.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:43.356 "hdgst": ${hdgst:-false}, 00:24:43.356 "ddgst": ${ddgst:-false} 00:24:43.356 }, 00:24:43.356 "method": "bdev_nvme_attach_controller" 00:24:43.356 } 00:24:43.356 EOF 00:24:43.356 )") 00:24:43.356 06:50:47 -- nvmf/common.sh@543 -- # cat 00:24:43.356 06:50:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:43.356 06:50:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:43.356 { 00:24:43.356 "params": { 00:24:43.356 "name": "Nvme$subsystem", 00:24:43.356 "trtype": "$TEST_TRANSPORT", 00:24:43.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:43.356 "adrfam": "ipv4", 00:24:43.356 "trsvcid": "$NVMF_PORT", 00:24:43.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:43.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:43.356 "hdgst": ${hdgst:-false}, 00:24:43.356 "ddgst": ${ddgst:-false} 00:24:43.356 }, 00:24:43.356 "method": "bdev_nvme_attach_controller" 00:24:43.356 } 00:24:43.356 EOF 00:24:43.356 )") 00:24:43.356 06:50:47 -- nvmf/common.sh@543 -- # cat 00:24:43.356 06:50:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:43.356 06:50:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:43.356 { 00:24:43.356 "params": { 00:24:43.356 "name": "Nvme$subsystem", 00:24:43.356 "trtype": "$TEST_TRANSPORT", 00:24:43.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:43.356 "adrfam": "ipv4", 00:24:43.356 "trsvcid": "$NVMF_PORT", 00:24:43.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:43.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:43.356 "hdgst": ${hdgst:-false}, 00:24:43.356 "ddgst": ${ddgst:-false} 00:24:43.356 }, 00:24:43.356 "method": "bdev_nvme_attach_controller" 00:24:43.356 } 00:24:43.356 EOF 00:24:43.356 )") 00:24:43.356 06:50:47 -- nvmf/common.sh@543 -- # cat 00:24:43.356 06:50:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:43.356 06:50:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:43.356 { 00:24:43.356 "params": { 00:24:43.356 "name": "Nvme$subsystem", 00:24:43.356 "trtype": "$TEST_TRANSPORT", 00:24:43.356 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:24:43.356 "adrfam": "ipv4", 00:24:43.356 "trsvcid": "$NVMF_PORT", 00:24:43.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:43.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:43.356 "hdgst": ${hdgst:-false}, 00:24:43.356 "ddgst": ${ddgst:-false} 00:24:43.356 }, 00:24:43.356 "method": "bdev_nvme_attach_controller" 00:24:43.356 } 00:24:43.356 EOF 00:24:43.356 )") 00:24:43.356 06:50:47 -- nvmf/common.sh@543 -- # cat 00:24:43.356 06:50:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:43.356 06:50:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:43.356 { 00:24:43.356 "params": { 00:24:43.356 "name": "Nvme$subsystem", 00:24:43.356 "trtype": "$TEST_TRANSPORT", 00:24:43.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:43.356 "adrfam": "ipv4", 00:24:43.356 "trsvcid": "$NVMF_PORT", 00:24:43.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:43.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:43.356 "hdgst": ${hdgst:-false}, 00:24:43.356 "ddgst": ${ddgst:-false} 00:24:43.356 }, 00:24:43.356 "method": "bdev_nvme_attach_controller" 00:24:43.356 } 00:24:43.356 EOF 00:24:43.356 )") 00:24:43.356 06:50:47 -- nvmf/common.sh@543 -- # cat 00:24:43.356 06:50:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:43.356 06:50:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:43.356 { 00:24:43.356 "params": { 00:24:43.356 "name": "Nvme$subsystem", 00:24:43.356 "trtype": "$TEST_TRANSPORT", 00:24:43.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:43.356 "adrfam": "ipv4", 00:24:43.356 "trsvcid": "$NVMF_PORT", 00:24:43.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:43.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:43.356 "hdgst": ${hdgst:-false}, 00:24:43.356 "ddgst": ${ddgst:-false} 00:24:43.356 }, 00:24:43.356 "method": "bdev_nvme_attach_controller" 00:24:43.356 } 00:24:43.356 EOF 00:24:43.356 )") 00:24:43.356 06:50:47 -- nvmf/common.sh@543 -- # cat 00:24:43.356 06:50:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:43.356 06:50:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:43.356 { 00:24:43.356 "params": { 00:24:43.356 "name": "Nvme$subsystem", 00:24:43.356 "trtype": "$TEST_TRANSPORT", 00:24:43.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:43.356 "adrfam": "ipv4", 00:24:43.356 "trsvcid": "$NVMF_PORT", 00:24:43.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:43.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:43.356 "hdgst": ${hdgst:-false}, 00:24:43.356 "ddgst": ${ddgst:-false} 00:24:43.356 }, 00:24:43.356 "method": "bdev_nvme_attach_controller" 00:24:43.356 } 00:24:43.356 EOF 00:24:43.356 )") 00:24:43.356 06:50:47 -- nvmf/common.sh@543 -- # cat 00:24:43.356 06:50:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:43.356 06:50:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:43.356 { 00:24:43.356 "params": { 00:24:43.356 "name": "Nvme$subsystem", 00:24:43.356 "trtype": "$TEST_TRANSPORT", 00:24:43.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:43.356 "adrfam": "ipv4", 00:24:43.356 "trsvcid": "$NVMF_PORT", 00:24:43.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:43.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:43.356 "hdgst": ${hdgst:-false}, 00:24:43.356 "ddgst": ${ddgst:-false} 00:24:43.356 }, 00:24:43.356 "method": "bdev_nvme_attach_controller" 00:24:43.356 } 00:24:43.356 EOF 00:24:43.356 )") 00:24:43.356 06:50:47 -- nvmf/common.sh@543 -- # cat 00:24:43.356 06:50:47 -- nvmf/common.sh@523 -- # for 
subsystem in "${@:-1}" 00:24:43.356 06:50:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:43.356 { 00:24:43.356 "params": { 00:24:43.356 "name": "Nvme$subsystem", 00:24:43.356 "trtype": "$TEST_TRANSPORT", 00:24:43.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:43.356 "adrfam": "ipv4", 00:24:43.356 "trsvcid": "$NVMF_PORT", 00:24:43.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:43.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:43.356 "hdgst": ${hdgst:-false}, 00:24:43.356 "ddgst": ${ddgst:-false} 00:24:43.356 }, 00:24:43.356 "method": "bdev_nvme_attach_controller" 00:24:43.356 } 00:24:43.356 EOF 00:24:43.356 )") 00:24:43.356 06:50:47 -- nvmf/common.sh@543 -- # cat 00:24:43.356 06:50:47 -- nvmf/common.sh@545 -- # jq . 00:24:43.356 06:50:47 -- nvmf/common.sh@546 -- # IFS=, 00:24:43.356 06:50:47 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:24:43.356 "params": { 00:24:43.356 "name": "Nvme1", 00:24:43.356 "trtype": "tcp", 00:24:43.356 "traddr": "10.0.0.2", 00:24:43.356 "adrfam": "ipv4", 00:24:43.356 "trsvcid": "4420", 00:24:43.356 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:43.356 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:43.356 "hdgst": false, 00:24:43.356 "ddgst": false 00:24:43.356 }, 00:24:43.356 "method": "bdev_nvme_attach_controller" 00:24:43.356 },{ 00:24:43.356 "params": { 00:24:43.356 "name": "Nvme2", 00:24:43.356 "trtype": "tcp", 00:24:43.356 "traddr": "10.0.0.2", 00:24:43.356 "adrfam": "ipv4", 00:24:43.356 "trsvcid": "4420", 00:24:43.356 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:43.356 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:43.356 "hdgst": false, 00:24:43.356 "ddgst": false 00:24:43.356 }, 00:24:43.356 "method": "bdev_nvme_attach_controller" 00:24:43.356 },{ 00:24:43.356 "params": { 00:24:43.356 "name": "Nvme3", 00:24:43.356 "trtype": "tcp", 00:24:43.357 "traddr": "10.0.0.2", 00:24:43.357 "adrfam": "ipv4", 00:24:43.357 "trsvcid": "4420", 00:24:43.357 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:43.357 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:43.357 "hdgst": false, 00:24:43.357 "ddgst": false 00:24:43.357 }, 00:24:43.357 "method": "bdev_nvme_attach_controller" 00:24:43.357 },{ 00:24:43.357 "params": { 00:24:43.357 "name": "Nvme4", 00:24:43.357 "trtype": "tcp", 00:24:43.357 "traddr": "10.0.0.2", 00:24:43.357 "adrfam": "ipv4", 00:24:43.357 "trsvcid": "4420", 00:24:43.357 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:43.357 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:43.357 "hdgst": false, 00:24:43.357 "ddgst": false 00:24:43.357 }, 00:24:43.357 "method": "bdev_nvme_attach_controller" 00:24:43.357 },{ 00:24:43.357 "params": { 00:24:43.357 "name": "Nvme5", 00:24:43.357 "trtype": "tcp", 00:24:43.357 "traddr": "10.0.0.2", 00:24:43.357 "adrfam": "ipv4", 00:24:43.357 "trsvcid": "4420", 00:24:43.357 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:43.357 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:43.357 "hdgst": false, 00:24:43.357 "ddgst": false 00:24:43.357 }, 00:24:43.357 "method": "bdev_nvme_attach_controller" 00:24:43.357 },{ 00:24:43.357 "params": { 00:24:43.357 "name": "Nvme6", 00:24:43.357 "trtype": "tcp", 00:24:43.357 "traddr": "10.0.0.2", 00:24:43.357 "adrfam": "ipv4", 00:24:43.357 "trsvcid": "4420", 00:24:43.357 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:43.357 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:43.357 "hdgst": false, 00:24:43.357 "ddgst": false 00:24:43.357 }, 00:24:43.357 "method": "bdev_nvme_attach_controller" 00:24:43.357 },{ 00:24:43.357 "params": { 00:24:43.357 "name": "Nvme7", 00:24:43.357 "trtype": 
"tcp", 00:24:43.357 "traddr": "10.0.0.2", 00:24:43.357 "adrfam": "ipv4", 00:24:43.357 "trsvcid": "4420", 00:24:43.357 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:43.357 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:43.357 "hdgst": false, 00:24:43.357 "ddgst": false 00:24:43.357 }, 00:24:43.357 "method": "bdev_nvme_attach_controller" 00:24:43.357 },{ 00:24:43.357 "params": { 00:24:43.357 "name": "Nvme8", 00:24:43.357 "trtype": "tcp", 00:24:43.357 "traddr": "10.0.0.2", 00:24:43.357 "adrfam": "ipv4", 00:24:43.357 "trsvcid": "4420", 00:24:43.357 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:43.357 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:43.357 "hdgst": false, 00:24:43.357 "ddgst": false 00:24:43.357 }, 00:24:43.357 "method": "bdev_nvme_attach_controller" 00:24:43.357 },{ 00:24:43.357 "params": { 00:24:43.357 "name": "Nvme9", 00:24:43.357 "trtype": "tcp", 00:24:43.357 "traddr": "10.0.0.2", 00:24:43.357 "adrfam": "ipv4", 00:24:43.357 "trsvcid": "4420", 00:24:43.357 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:43.357 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:43.357 "hdgst": false, 00:24:43.357 "ddgst": false 00:24:43.357 }, 00:24:43.357 "method": "bdev_nvme_attach_controller" 00:24:43.357 },{ 00:24:43.357 "params": { 00:24:43.357 "name": "Nvme10", 00:24:43.357 "trtype": "tcp", 00:24:43.357 "traddr": "10.0.0.2", 00:24:43.357 "adrfam": "ipv4", 00:24:43.357 "trsvcid": "4420", 00:24:43.357 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:43.357 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:43.357 "hdgst": false, 00:24:43.357 "ddgst": false 00:24:43.357 }, 00:24:43.357 "method": "bdev_nvme_attach_controller" 00:24:43.357 }' 00:24:43.357 [2024-04-17 06:50:47.957216] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:24:43.357 [2024-04-17 06:50:47.957294] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:43.615 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.615 [2024-04-17 06:50:48.022414] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.615 [2024-04-17 06:50:48.106932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.512 06:50:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:45.512 06:50:49 -- common/autotest_common.sh@850 -- # return 0 00:24:45.512 06:50:49 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:45.512 06:50:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.512 06:50:49 -- common/autotest_common.sh@10 -- # set +x 00:24:45.512 06:50:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.512 06:50:49 -- target/shutdown.sh@83 -- # kill -9 56448 00:24:45.512 06:50:49 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:24:45.512 06:50:49 -- target/shutdown.sh@87 -- # sleep 1 00:24:46.444 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 56448 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:46.444 06:50:50 -- target/shutdown.sh@88 -- # kill -0 56277 00:24:46.444 06:50:50 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:46.444 06:50:50 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 
00:24:46.444 06:50:50 -- nvmf/common.sh@521 -- # config=() 00:24:46.444 06:50:50 -- nvmf/common.sh@521 -- # local subsystem config 00:24:46.444 06:50:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:46.444 06:50:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:46.444 { 00:24:46.444 "params": { 00:24:46.444 "name": "Nvme$subsystem", 00:24:46.444 "trtype": "$TEST_TRANSPORT", 00:24:46.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:46.444 "adrfam": "ipv4", 00:24:46.444 "trsvcid": "$NVMF_PORT", 00:24:46.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:46.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:46.444 "hdgst": ${hdgst:-false}, 00:24:46.444 "ddgst": ${ddgst:-false} 00:24:46.444 }, 00:24:46.444 "method": "bdev_nvme_attach_controller" 00:24:46.444 } 00:24:46.444 EOF 00:24:46.444 )") 00:24:46.444 06:50:50 -- nvmf/common.sh@543 -- # cat 00:24:46.444 06:50:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:46.444 06:50:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:46.444 { 00:24:46.444 "params": { 00:24:46.444 "name": "Nvme$subsystem", 00:24:46.444 "trtype": "$TEST_TRANSPORT", 00:24:46.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:46.444 "adrfam": "ipv4", 00:24:46.444 "trsvcid": "$NVMF_PORT", 00:24:46.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:46.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:46.444 "hdgst": ${hdgst:-false}, 00:24:46.444 "ddgst": ${ddgst:-false} 00:24:46.444 }, 00:24:46.444 "method": "bdev_nvme_attach_controller" 00:24:46.444 } 00:24:46.444 EOF 00:24:46.444 )") 00:24:46.444 06:50:50 -- nvmf/common.sh@543 -- # cat 00:24:46.444 06:50:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:46.444 06:50:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:46.444 { 00:24:46.444 "params": { 00:24:46.444 "name": "Nvme$subsystem", 00:24:46.444 "trtype": "$TEST_TRANSPORT", 00:24:46.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:46.445 "adrfam": "ipv4", 00:24:46.445 "trsvcid": "$NVMF_PORT", 00:24:46.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:46.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:46.445 "hdgst": ${hdgst:-false}, 00:24:46.445 "ddgst": ${ddgst:-false} 00:24:46.445 }, 00:24:46.445 "method": "bdev_nvme_attach_controller" 00:24:46.445 } 00:24:46.445 EOF 00:24:46.445 )") 00:24:46.445 06:50:50 -- nvmf/common.sh@543 -- # cat 00:24:46.445 06:50:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:46.445 06:50:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:46.445 { 00:24:46.445 "params": { 00:24:46.445 "name": "Nvme$subsystem", 00:24:46.445 "trtype": "$TEST_TRANSPORT", 00:24:46.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:46.445 "adrfam": "ipv4", 00:24:46.445 "trsvcid": "$NVMF_PORT", 00:24:46.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:46.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:46.445 "hdgst": ${hdgst:-false}, 00:24:46.445 "ddgst": ${ddgst:-false} 00:24:46.445 }, 00:24:46.445 "method": "bdev_nvme_attach_controller" 00:24:46.445 } 00:24:46.445 EOF 00:24:46.445 )") 00:24:46.445 06:50:50 -- nvmf/common.sh@543 -- # cat 00:24:46.445 06:50:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:46.445 06:50:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:46.445 { 00:24:46.445 "params": { 00:24:46.445 "name": "Nvme$subsystem", 00:24:46.445 "trtype": "$TEST_TRANSPORT", 00:24:46.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:46.445 "adrfam": "ipv4", 00:24:46.445 "trsvcid": "$NVMF_PORT", 
00:24:46.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:46.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:46.445 "hdgst": ${hdgst:-false}, 00:24:46.445 "ddgst": ${ddgst:-false} 00:24:46.445 }, 00:24:46.445 "method": "bdev_nvme_attach_controller" 00:24:46.445 } 00:24:46.445 EOF 00:24:46.445 )") 00:24:46.445 06:50:50 -- nvmf/common.sh@543 -- # cat 00:24:46.445 06:50:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:46.445 06:50:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:46.445 { 00:24:46.445 "params": { 00:24:46.445 "name": "Nvme$subsystem", 00:24:46.445 "trtype": "$TEST_TRANSPORT", 00:24:46.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:46.445 "adrfam": "ipv4", 00:24:46.445 "trsvcid": "$NVMF_PORT", 00:24:46.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:46.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:46.445 "hdgst": ${hdgst:-false}, 00:24:46.445 "ddgst": ${ddgst:-false} 00:24:46.445 }, 00:24:46.445 "method": "bdev_nvme_attach_controller" 00:24:46.445 } 00:24:46.445 EOF 00:24:46.445 )") 00:24:46.445 06:50:50 -- nvmf/common.sh@543 -- # cat 00:24:46.445 06:50:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:46.445 06:50:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:46.445 { 00:24:46.445 "params": { 00:24:46.445 "name": "Nvme$subsystem", 00:24:46.445 "trtype": "$TEST_TRANSPORT", 00:24:46.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:46.445 "adrfam": "ipv4", 00:24:46.445 "trsvcid": "$NVMF_PORT", 00:24:46.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:46.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:46.445 "hdgst": ${hdgst:-false}, 00:24:46.445 "ddgst": ${ddgst:-false} 00:24:46.445 }, 00:24:46.445 "method": "bdev_nvme_attach_controller" 00:24:46.445 } 00:24:46.445 EOF 00:24:46.445 )") 00:24:46.445 06:50:50 -- nvmf/common.sh@543 -- # cat 00:24:46.445 06:50:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:46.445 06:50:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:46.445 { 00:24:46.445 "params": { 00:24:46.445 "name": "Nvme$subsystem", 00:24:46.445 "trtype": "$TEST_TRANSPORT", 00:24:46.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:46.445 "adrfam": "ipv4", 00:24:46.445 "trsvcid": "$NVMF_PORT", 00:24:46.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:46.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:46.445 "hdgst": ${hdgst:-false}, 00:24:46.445 "ddgst": ${ddgst:-false} 00:24:46.445 }, 00:24:46.445 "method": "bdev_nvme_attach_controller" 00:24:46.445 } 00:24:46.445 EOF 00:24:46.445 )") 00:24:46.445 06:50:50 -- nvmf/common.sh@543 -- # cat 00:24:46.445 06:50:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:46.445 06:50:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:46.445 { 00:24:46.445 "params": { 00:24:46.445 "name": "Nvme$subsystem", 00:24:46.445 "trtype": "$TEST_TRANSPORT", 00:24:46.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:46.445 "adrfam": "ipv4", 00:24:46.445 "trsvcid": "$NVMF_PORT", 00:24:46.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:46.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:46.445 "hdgst": ${hdgst:-false}, 00:24:46.445 "ddgst": ${ddgst:-false} 00:24:46.445 }, 00:24:46.445 "method": "bdev_nvme_attach_controller" 00:24:46.445 } 00:24:46.445 EOF 00:24:46.445 )") 00:24:46.445 06:50:50 -- nvmf/common.sh@543 -- # cat 00:24:46.445 06:50:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:46.445 06:50:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 
00:24:46.445 { 00:24:46.445 "params": { 00:24:46.445 "name": "Nvme$subsystem", 00:24:46.445 "trtype": "$TEST_TRANSPORT", 00:24:46.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:46.445 "adrfam": "ipv4", 00:24:46.445 "trsvcid": "$NVMF_PORT", 00:24:46.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:46.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:46.445 "hdgst": ${hdgst:-false}, 00:24:46.445 "ddgst": ${ddgst:-false} 00:24:46.445 }, 00:24:46.445 "method": "bdev_nvme_attach_controller" 00:24:46.445 } 00:24:46.445 EOF 00:24:46.445 )") 00:24:46.445 06:50:50 -- nvmf/common.sh@543 -- # cat 00:24:46.445 06:50:50 -- nvmf/common.sh@545 -- # jq . 00:24:46.445 06:50:50 -- nvmf/common.sh@546 -- # IFS=, 00:24:46.445 06:50:50 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:24:46.445 "params": { 00:24:46.445 "name": "Nvme1", 00:24:46.445 "trtype": "tcp", 00:24:46.445 "traddr": "10.0.0.2", 00:24:46.445 "adrfam": "ipv4", 00:24:46.445 "trsvcid": "4420", 00:24:46.445 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.445 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:46.445 "hdgst": false, 00:24:46.445 "ddgst": false 00:24:46.445 }, 00:24:46.445 "method": "bdev_nvme_attach_controller" 00:24:46.445 },{ 00:24:46.445 "params": { 00:24:46.445 "name": "Nvme2", 00:24:46.445 "trtype": "tcp", 00:24:46.445 "traddr": "10.0.0.2", 00:24:46.445 "adrfam": "ipv4", 00:24:46.445 "trsvcid": "4420", 00:24:46.445 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:46.445 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:46.445 "hdgst": false, 00:24:46.445 "ddgst": false 00:24:46.445 }, 00:24:46.445 "method": "bdev_nvme_attach_controller" 00:24:46.445 },{ 00:24:46.445 "params": { 00:24:46.445 "name": "Nvme3", 00:24:46.445 "trtype": "tcp", 00:24:46.445 "traddr": "10.0.0.2", 00:24:46.445 "adrfam": "ipv4", 00:24:46.445 "trsvcid": "4420", 00:24:46.445 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:46.445 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:46.445 "hdgst": false, 00:24:46.445 "ddgst": false 00:24:46.445 }, 00:24:46.445 "method": "bdev_nvme_attach_controller" 00:24:46.445 },{ 00:24:46.445 "params": { 00:24:46.445 "name": "Nvme4", 00:24:46.445 "trtype": "tcp", 00:24:46.445 "traddr": "10.0.0.2", 00:24:46.445 "adrfam": "ipv4", 00:24:46.445 "trsvcid": "4420", 00:24:46.445 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:46.445 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:46.445 "hdgst": false, 00:24:46.445 "ddgst": false 00:24:46.445 }, 00:24:46.445 "method": "bdev_nvme_attach_controller" 00:24:46.445 },{ 00:24:46.445 "params": { 00:24:46.445 "name": "Nvme5", 00:24:46.445 "trtype": "tcp", 00:24:46.445 "traddr": "10.0.0.2", 00:24:46.445 "adrfam": "ipv4", 00:24:46.445 "trsvcid": "4420", 00:24:46.445 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:46.445 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:46.445 "hdgst": false, 00:24:46.445 "ddgst": false 00:24:46.445 }, 00:24:46.445 "method": "bdev_nvme_attach_controller" 00:24:46.445 },{ 00:24:46.445 "params": { 00:24:46.445 "name": "Nvme6", 00:24:46.445 "trtype": "tcp", 00:24:46.445 "traddr": "10.0.0.2", 00:24:46.445 "adrfam": "ipv4", 00:24:46.445 "trsvcid": "4420", 00:24:46.445 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:46.445 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:46.445 "hdgst": false, 00:24:46.445 "ddgst": false 00:24:46.445 }, 00:24:46.445 "method": "bdev_nvme_attach_controller" 00:24:46.445 },{ 00:24:46.445 "params": { 00:24:46.445 "name": "Nvme7", 00:24:46.445 "trtype": "tcp", 00:24:46.445 "traddr": "10.0.0.2", 00:24:46.445 "adrfam": "ipv4", 00:24:46.445 "trsvcid": 
"4420", 00:24:46.445 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:46.445 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:46.445 "hdgst": false, 00:24:46.445 "ddgst": false 00:24:46.445 }, 00:24:46.445 "method": "bdev_nvme_attach_controller" 00:24:46.445 },{ 00:24:46.445 "params": { 00:24:46.445 "name": "Nvme8", 00:24:46.446 "trtype": "tcp", 00:24:46.446 "traddr": "10.0.0.2", 00:24:46.446 "adrfam": "ipv4", 00:24:46.446 "trsvcid": "4420", 00:24:46.446 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:46.446 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:46.446 "hdgst": false, 00:24:46.446 "ddgst": false 00:24:46.446 }, 00:24:46.446 "method": "bdev_nvme_attach_controller" 00:24:46.446 },{ 00:24:46.446 "params": { 00:24:46.446 "name": "Nvme9", 00:24:46.446 "trtype": "tcp", 00:24:46.446 "traddr": "10.0.0.2", 00:24:46.446 "adrfam": "ipv4", 00:24:46.446 "trsvcid": "4420", 00:24:46.446 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:46.446 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:46.446 "hdgst": false, 00:24:46.446 "ddgst": false 00:24:46.446 }, 00:24:46.446 "method": "bdev_nvme_attach_controller" 00:24:46.446 },{ 00:24:46.446 "params": { 00:24:46.446 "name": "Nvme10", 00:24:46.446 "trtype": "tcp", 00:24:46.446 "traddr": "10.0.0.2", 00:24:46.446 "adrfam": "ipv4", 00:24:46.446 "trsvcid": "4420", 00:24:46.446 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:46.446 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:46.446 "hdgst": false, 00:24:46.446 "ddgst": false 00:24:46.446 }, 00:24:46.446 "method": "bdev_nvme_attach_controller" 00:24:46.446 }' 00:24:46.446 [2024-04-17 06:50:50.961431] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:24:46.446 [2024-04-17 06:50:50.961537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56792 ] 00:24:46.446 EAL: No free 2048 kB hugepages reported on node 1 00:24:46.446 [2024-04-17 06:50:51.029839] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.703 [2024-04-17 06:50:51.116035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.703 [2024-04-17 06:50:51.125318] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:24:48.600 Running I/O for 1 seconds... 
00:24:49.535 00:24:49.535 Latency(us) 00:24:49.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:49.535 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:49.535 Verification LBA range: start 0x0 length 0x400 00:24:49.535 Nvme1n1 : 1.04 187.63 11.73 0.00 0.00 330658.10 22622.06 282727.16 00:24:49.535 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:49.535 Verification LBA range: start 0x0 length 0x400 00:24:49.535 Nvme2n1 : 1.15 222.51 13.91 0.00 0.00 279589.55 20971.52 273406.48 00:24:49.535 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:49.535 Verification LBA range: start 0x0 length 0x400 00:24:49.535 Nvme3n1 : 1.10 233.02 14.56 0.00 0.00 262789.50 19612.25 273406.48 00:24:49.535 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:49.535 Verification LBA range: start 0x0 length 0x400 00:24:49.535 Nvme4n1 : 1.08 177.14 11.07 0.00 0.00 339520.22 21359.88 302921.96 00:24:49.535 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:49.535 Verification LBA range: start 0x0 length 0x400 00:24:49.535 Nvme5n1 : 1.20 214.12 13.38 0.00 0.00 277936.73 22427.88 282727.16 00:24:49.535 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:49.535 Verification LBA range: start 0x0 length 0x400 00:24:49.535 Nvme6n1 : 1.17 221.66 13.85 0.00 0.00 262794.04 4781.70 296708.17 00:24:49.535 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:49.535 Verification LBA range: start 0x0 length 0x400 00:24:49.535 Nvme7n1 : 1.12 228.05 14.25 0.00 0.00 250672.55 20388.98 270299.59 00:24:49.535 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:49.535 Verification LBA range: start 0x0 length 0x400 00:24:49.535 Nvme8n1 : 1.20 212.66 13.29 0.00 0.00 266502.64 22039.51 340204.66 00:24:49.535 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:49.535 Verification LBA range: start 0x0 length 0x400 00:24:49.535 Nvme9n1 : 1.20 266.64 16.66 0.00 0.00 208554.82 19223.89 267192.70 00:24:49.535 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:49.535 Verification LBA range: start 0x0 length 0x400 00:24:49.535 Nvme10n1 : 1.18 216.15 13.51 0.00 0.00 252538.50 17670.45 273406.48 00:24:49.535 =================================================================================================================== 00:24:49.535 Total : 2179.59 136.22 0.00 0.00 268413.90 4781.70 340204.66 00:24:49.794 06:50:54 -- target/shutdown.sh@94 -- # stoptarget 00:24:49.794 06:50:54 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:49.794 06:50:54 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:49.794 06:50:54 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:49.794 06:50:54 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:49.794 06:50:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:49.794 06:50:54 -- nvmf/common.sh@117 -- # sync 00:24:49.794 06:50:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:49.794 06:50:54 -- nvmf/common.sh@120 -- # set +e 00:24:49.794 06:50:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:49.794 06:50:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:49.794 rmmod nvme_tcp 00:24:49.794 rmmod nvme_fabrics 00:24:49.794 rmmod nvme_keyring 
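Note: after the tc1 summary table the rest of the test is teardown. stoptarget deletes the generated bdevperf config and the RPC batch file, and nvmftestfini unloads the kernel NVMe/TCP initiator modules, kills the target and flushes the test address. Condensed into plain commands, the trace immediately above and below amounts to roughly the following (pid 56277 and the cvl_0_1 interface are specific to this job):

# Condensed from the stoptarget/nvmftestfini trace for this run; a summary, not
# a drop-in replacement for the helpers.
rm -f ./local-job0-0-verify.state
rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
sync
modprobe -v -r nvme-tcp        # also drops nvme_fabrics and nvme_keyring
modprobe -v -r nvme-fabrics
kill 56277 && wait 56277       # killprocess: the nvmf_tgt started for tc1
ip -4 addr flush cvl_0_1       # finally clear the initiator-side address
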
00:24:49.794 06:50:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:49.794 06:50:54 -- nvmf/common.sh@124 -- # set -e 00:24:49.794 06:50:54 -- nvmf/common.sh@125 -- # return 0 00:24:49.794 06:50:54 -- nvmf/common.sh@478 -- # '[' -n 56277 ']' 00:24:49.794 06:50:54 -- nvmf/common.sh@479 -- # killprocess 56277 00:24:49.794 06:50:54 -- common/autotest_common.sh@936 -- # '[' -z 56277 ']' 00:24:49.794 06:50:54 -- common/autotest_common.sh@940 -- # kill -0 56277 00:24:49.794 06:50:54 -- common/autotest_common.sh@941 -- # uname 00:24:49.794 06:50:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:49.794 06:50:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56277 00:24:50.086 06:50:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:50.086 06:50:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:50.086 06:50:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56277' 00:24:50.086 killing process with pid 56277 00:24:50.086 06:50:54 -- common/autotest_common.sh@955 -- # kill 56277 00:24:50.086 06:50:54 -- common/autotest_common.sh@960 -- # wait 56277 00:24:50.344 06:50:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:50.344 06:50:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:50.344 06:50:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:50.344 06:50:54 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:50.344 06:50:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:50.344 06:50:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.344 06:50:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:50.344 06:50:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.876 06:50:56 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:52.876 00:24:52.876 real 0m11.873s 00:24:52.876 user 0m35.045s 00:24:52.876 sys 0m3.151s 00:24:52.876 06:50:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:52.876 06:50:56 -- common/autotest_common.sh@10 -- # set +x 00:24:52.876 ************************************ 00:24:52.876 END TEST nvmf_shutdown_tc1 00:24:52.876 ************************************ 00:24:52.876 06:50:56 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:52.876 06:50:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:52.876 06:50:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:52.876 06:50:56 -- common/autotest_common.sh@10 -- # set +x 00:24:52.876 ************************************ 00:24:52.876 START TEST nvmf_shutdown_tc2 00:24:52.876 ************************************ 00:24:52.876 06:50:57 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2 00:24:52.876 06:50:57 -- target/shutdown.sh@99 -- # starttarget 00:24:52.876 06:50:57 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:52.876 06:50:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:52.876 06:50:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:52.876 06:50:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:52.876 06:50:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:52.876 06:50:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:52.876 06:50:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.876 06:50:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:52.876 06:50:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.876 06:50:57 -- nvmf/common.sh@403 
-- # [[ phy != virt ]] 00:24:52.876 06:50:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:52.876 06:50:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:52.876 06:50:57 -- common/autotest_common.sh@10 -- # set +x 00:24:52.876 06:50:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:52.876 06:50:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:52.876 06:50:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:52.876 06:50:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:52.876 06:50:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:52.876 06:50:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:52.876 06:50:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:52.876 06:50:57 -- nvmf/common.sh@295 -- # net_devs=() 00:24:52.876 06:50:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:52.876 06:50:57 -- nvmf/common.sh@296 -- # e810=() 00:24:52.876 06:50:57 -- nvmf/common.sh@296 -- # local -ga e810 00:24:52.876 06:50:57 -- nvmf/common.sh@297 -- # x722=() 00:24:52.876 06:50:57 -- nvmf/common.sh@297 -- # local -ga x722 00:24:52.876 06:50:57 -- nvmf/common.sh@298 -- # mlx=() 00:24:52.876 06:50:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:52.876 06:50:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:52.876 06:50:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:52.876 06:50:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:52.876 06:50:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:52.876 06:50:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:52.876 06:50:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:52.876 06:50:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:52.876 06:50:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:52.876 06:50:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:52.876 06:50:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:52.876 06:50:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:52.876 06:50:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:52.876 06:50:57 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:52.876 06:50:57 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:52.876 06:50:57 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:52.876 06:50:57 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:52.876 06:50:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:52.876 06:50:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:52.876 06:50:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:52.876 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:52.876 06:50:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:52.876 06:50:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:52.876 06:50:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.876 06:50:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.876 06:50:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:52.876 06:50:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:52.876 06:50:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:52.876 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:52.876 06:50:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:52.876 06:50:57 -- nvmf/common.sh@346 
-- # [[ ice == unbound ]] 00:24:52.876 06:50:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.876 06:50:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.876 06:50:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:52.876 06:50:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:52.876 06:50:57 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:52.876 06:50:57 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:52.876 06:50:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:52.876 06:50:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.876 06:50:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:52.876 06:50:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.876 06:50:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:52.876 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:52.876 06:50:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.876 06:50:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:52.876 06:50:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.876 06:50:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:52.876 06:50:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.876 06:50:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:52.876 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:52.876 06:50:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.876 06:50:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:52.876 06:50:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:52.876 06:50:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:52.876 06:50:57 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:52.876 06:50:57 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:52.876 06:50:57 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:52.876 06:50:57 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:52.876 06:50:57 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:52.876 06:50:57 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:52.876 06:50:57 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:52.876 06:50:57 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:52.876 06:50:57 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:52.876 06:50:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:52.876 06:50:57 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:52.876 06:50:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:52.876 06:50:57 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:52.876 06:50:57 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:52.876 06:50:57 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:52.876 06:50:57 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:52.876 06:50:57 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:52.876 06:50:57 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:52.876 06:50:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:52.876 06:50:57 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:52.876 06:50:57 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:52.876 06:50:57 -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:52.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:52.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:24:52.876 00:24:52.876 --- 10.0.0.2 ping statistics --- 00:24:52.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.876 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:24:52.876 06:50:57 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:52.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:52.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:24:52.876 00:24:52.876 --- 10.0.0.1 ping statistics --- 00:24:52.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.876 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:24:52.876 06:50:57 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:52.876 06:50:57 -- nvmf/common.sh@411 -- # return 0 00:24:52.876 06:50:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:52.876 06:50:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:52.876 06:50:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:52.876 06:50:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:52.876 06:50:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:52.876 06:50:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:52.876 06:50:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:52.877 06:50:57 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:52.877 06:50:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:52.877 06:50:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:52.877 06:50:57 -- common/autotest_common.sh@10 -- # set +x 00:24:52.877 06:50:57 -- nvmf/common.sh@470 -- # nvmfpid=57652 00:24:52.877 06:50:57 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:52.877 06:50:57 -- nvmf/common.sh@471 -- # waitforlisten 57652 00:24:52.877 06:50:57 -- common/autotest_common.sh@817 -- # '[' -z 57652 ']' 00:24:52.877 06:50:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.877 06:50:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:52.877 06:50:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.877 06:50:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:52.877 06:50:57 -- common/autotest_common.sh@10 -- # set +x 00:24:52.877 [2024-04-17 06:50:57.209172] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:24:52.877 [2024-04-17 06:50:57.209262] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:52.877 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.877 [2024-04-17 06:50:57.273887] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:52.877 [2024-04-17 06:50:57.361998] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:52.877 [2024-04-17 06:50:57.362054] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
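Note: those two ping checks conclude nvmf_tcp_init. Because this is a phy run (is_hw=yes), the harness splits the two ice ports found under 0000:0a:00.0/1 between a private network namespace for the SPDK target and the default namespace for the kernel initiator. Collected from the trace above into one place, the setup is:

# Collected verbatim from the nvmf_tcp_init trace above; cvl_0_0 becomes the
# target-side port inside the namespace, cvl_0_1 stays with the initiator.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                          # initiator -> target
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1   # target -> initiator
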
00:24:52.877 [2024-04-17 06:50:57.362068] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:52.877 [2024-04-17 06:50:57.362080] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:52.877 [2024-04-17 06:50:57.362089] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:52.877 [2024-04-17 06:50:57.362193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:52.877 [2024-04-17 06:50:57.362254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:52.877 [2024-04-17 06:50:57.362321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:52.877 [2024-04-17 06:50:57.362324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:53.135 06:50:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:53.135 06:50:57 -- common/autotest_common.sh@850 -- # return 0 00:24:53.135 06:50:57 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:53.135 06:50:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:53.135 06:50:57 -- common/autotest_common.sh@10 -- # set +x 00:24:53.135 06:50:57 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:53.135 06:50:57 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:53.135 06:50:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.135 06:50:57 -- common/autotest_common.sh@10 -- # set +x 00:24:53.135 [2024-04-17 06:50:57.517961] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:53.135 06:50:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.135 06:50:57 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:53.135 06:50:57 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:53.135 06:50:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:53.135 06:50:57 -- common/autotest_common.sh@10 -- # set +x 00:24:53.135 06:50:57 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:53.135 06:50:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:53.135 06:50:57 -- target/shutdown.sh@28 -- # cat 00:24:53.135 06:50:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:53.135 06:50:57 -- target/shutdown.sh@28 -- # cat 00:24:53.135 06:50:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:53.135 06:50:57 -- target/shutdown.sh@28 -- # cat 00:24:53.135 06:50:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:53.135 06:50:57 -- target/shutdown.sh@28 -- # cat 00:24:53.135 06:50:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:53.135 06:50:57 -- target/shutdown.sh@28 -- # cat 00:24:53.135 06:50:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:53.135 06:50:57 -- target/shutdown.sh@28 -- # cat 00:24:53.135 06:50:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:53.135 06:50:57 -- target/shutdown.sh@28 -- # cat 00:24:53.135 06:50:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:53.135 06:50:57 -- target/shutdown.sh@28 -- # cat 00:24:53.135 06:50:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:53.135 06:50:57 -- target/shutdown.sh@28 -- # cat 00:24:53.135 06:50:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:53.135 06:50:57 -- 
target/shutdown.sh@28 -- # cat 00:24:53.135 06:50:57 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:53.135 06:50:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.135 06:50:57 -- common/autotest_common.sh@10 -- # set +x 00:24:53.135 Malloc1 00:24:53.135 [2024-04-17 06:50:57.603626] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:53.135 Malloc2 00:24:53.135 Malloc3 00:24:53.135 Malloc4 00:24:53.392 Malloc5 00:24:53.392 Malloc6 00:24:53.392 Malloc7 00:24:53.392 Malloc8 00:24:53.392 Malloc9 00:24:53.651 Malloc10 00:24:53.651 06:50:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.651 06:50:58 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:53.651 06:50:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:53.651 06:50:58 -- common/autotest_common.sh@10 -- # set +x 00:24:53.651 06:50:58 -- target/shutdown.sh@103 -- # perfpid=57828 00:24:53.651 06:50:58 -- target/shutdown.sh@104 -- # waitforlisten 57828 /var/tmp/bdevperf.sock 00:24:53.651 06:50:58 -- common/autotest_common.sh@817 -- # '[' -z 57828 ']' 00:24:53.651 06:50:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:53.651 06:50:58 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:53.651 06:50:58 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:53.651 06:50:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:53.651 06:50:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:53.651 06:50:58 -- nvmf/common.sh@521 -- # config=() 00:24:53.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
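Note: create_subsystems (shutdown.sh@26-35) appends one block per subsystem to rpcs.txt via the repeated "# cat" steps above and then replays the whole file in a single rpc_cmd batch. The literal RPC lines are never echoed in the trace, but given the Malloc1..Malloc10 bdevs and the 10.0.0.2:4420 TCP listener that appear as side effects, each block plausibly looks like the following hedged reconstruction (illustrative only, not the real file contents; the 64 MiB / 512 B malloc size and the SPDK$i serial are assumptions):

# Hypothetical rpcs.txt entry for subsystem $i, reconstructed from observable
# side effects (MallocN bdevs, cnodeN subsystems, TCP listeners on 4420).
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
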
00:24:53.651 06:50:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:53.651 06:50:58 -- nvmf/common.sh@521 -- # local subsystem config 00:24:53.651 06:50:58 -- common/autotest_common.sh@10 -- # set +x 00:24:53.651 06:50:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:53.651 06:50:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:53.651 { 00:24:53.651 "params": { 00:24:53.651 "name": "Nvme$subsystem", 00:24:53.651 "trtype": "$TEST_TRANSPORT", 00:24:53.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:53.651 "adrfam": "ipv4", 00:24:53.651 "trsvcid": "$NVMF_PORT", 00:24:53.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:53.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:53.651 "hdgst": ${hdgst:-false}, 00:24:53.651 "ddgst": ${ddgst:-false} 00:24:53.651 }, 00:24:53.651 "method": "bdev_nvme_attach_controller" 00:24:53.651 } 00:24:53.651 EOF 00:24:53.651 )") 00:24:53.651 06:50:58 -- nvmf/common.sh@543 -- # cat 00:24:53.651 06:50:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:53.651 06:50:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:53.651 { 00:24:53.651 "params": { 00:24:53.651 "name": "Nvme$subsystem", 00:24:53.651 "trtype": "$TEST_TRANSPORT", 00:24:53.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:53.651 "adrfam": "ipv4", 00:24:53.651 "trsvcid": "$NVMF_PORT", 00:24:53.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:53.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:53.651 "hdgst": ${hdgst:-false}, 00:24:53.651 "ddgst": ${ddgst:-false} 00:24:53.651 }, 00:24:53.651 "method": "bdev_nvme_attach_controller" 00:24:53.651 } 00:24:53.651 EOF 00:24:53.651 )") 00:24:53.651 06:50:58 -- nvmf/common.sh@543 -- # cat 00:24:53.651 06:50:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:53.651 06:50:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:53.651 { 00:24:53.651 "params": { 00:24:53.651 "name": "Nvme$subsystem", 00:24:53.651 "trtype": "$TEST_TRANSPORT", 00:24:53.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:53.651 "adrfam": "ipv4", 00:24:53.651 "trsvcid": "$NVMF_PORT", 00:24:53.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:53.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:53.651 "hdgst": ${hdgst:-false}, 00:24:53.651 "ddgst": ${ddgst:-false} 00:24:53.651 }, 00:24:53.651 "method": "bdev_nvme_attach_controller" 00:24:53.651 } 00:24:53.651 EOF 00:24:53.651 )") 00:24:53.651 06:50:58 -- nvmf/common.sh@543 -- # cat 00:24:53.651 06:50:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:53.651 06:50:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:53.651 { 00:24:53.651 "params": { 00:24:53.651 "name": "Nvme$subsystem", 00:24:53.651 "trtype": "$TEST_TRANSPORT", 00:24:53.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:53.651 "adrfam": "ipv4", 00:24:53.651 "trsvcid": "$NVMF_PORT", 00:24:53.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:53.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:53.651 "hdgst": ${hdgst:-false}, 00:24:53.651 "ddgst": ${ddgst:-false} 00:24:53.651 }, 00:24:53.651 "method": "bdev_nvme_attach_controller" 00:24:53.651 } 00:24:53.651 EOF 00:24:53.651 )") 00:24:53.651 06:50:58 -- nvmf/common.sh@543 -- # cat 00:24:53.651 06:50:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:53.651 06:50:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:53.651 { 00:24:53.651 "params": { 00:24:53.651 "name": "Nvme$subsystem", 00:24:53.651 "trtype": "$TEST_TRANSPORT", 00:24:53.651 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:24:53.651 "adrfam": "ipv4", 00:24:53.651 "trsvcid": "$NVMF_PORT", 00:24:53.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:53.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:53.651 "hdgst": ${hdgst:-false}, 00:24:53.651 "ddgst": ${ddgst:-false} 00:24:53.651 }, 00:24:53.651 "method": "bdev_nvme_attach_controller" 00:24:53.651 } 00:24:53.651 EOF 00:24:53.651 )") 00:24:53.651 06:50:58 -- nvmf/common.sh@543 -- # cat 00:24:53.651 06:50:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:53.651 06:50:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:53.651 { 00:24:53.651 "params": { 00:24:53.651 "name": "Nvme$subsystem", 00:24:53.651 "trtype": "$TEST_TRANSPORT", 00:24:53.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:53.651 "adrfam": "ipv4", 00:24:53.651 "trsvcid": "$NVMF_PORT", 00:24:53.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:53.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:53.651 "hdgst": ${hdgst:-false}, 00:24:53.651 "ddgst": ${ddgst:-false} 00:24:53.651 }, 00:24:53.651 "method": "bdev_nvme_attach_controller" 00:24:53.651 } 00:24:53.651 EOF 00:24:53.651 )") 00:24:53.651 06:50:58 -- nvmf/common.sh@543 -- # cat 00:24:53.651 06:50:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:53.651 06:50:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:53.651 { 00:24:53.651 "params": { 00:24:53.651 "name": "Nvme$subsystem", 00:24:53.651 "trtype": "$TEST_TRANSPORT", 00:24:53.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:53.651 "adrfam": "ipv4", 00:24:53.651 "trsvcid": "$NVMF_PORT", 00:24:53.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:53.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:53.651 "hdgst": ${hdgst:-false}, 00:24:53.651 "ddgst": ${ddgst:-false} 00:24:53.651 }, 00:24:53.651 "method": "bdev_nvme_attach_controller" 00:24:53.651 } 00:24:53.651 EOF 00:24:53.651 )") 00:24:53.651 06:50:58 -- nvmf/common.sh@543 -- # cat 00:24:53.651 06:50:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:53.651 06:50:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:53.651 { 00:24:53.651 "params": { 00:24:53.651 "name": "Nvme$subsystem", 00:24:53.651 "trtype": "$TEST_TRANSPORT", 00:24:53.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:53.651 "adrfam": "ipv4", 00:24:53.651 "trsvcid": "$NVMF_PORT", 00:24:53.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:53.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:53.651 "hdgst": ${hdgst:-false}, 00:24:53.651 "ddgst": ${ddgst:-false} 00:24:53.651 }, 00:24:53.651 "method": "bdev_nvme_attach_controller" 00:24:53.651 } 00:24:53.651 EOF 00:24:53.651 )") 00:24:53.651 06:50:58 -- nvmf/common.sh@543 -- # cat 00:24:53.651 06:50:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:53.651 06:50:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:53.651 { 00:24:53.651 "params": { 00:24:53.651 "name": "Nvme$subsystem", 00:24:53.651 "trtype": "$TEST_TRANSPORT", 00:24:53.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:53.651 "adrfam": "ipv4", 00:24:53.651 "trsvcid": "$NVMF_PORT", 00:24:53.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:53.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:53.651 "hdgst": ${hdgst:-false}, 00:24:53.651 "ddgst": ${ddgst:-false} 00:24:53.651 }, 00:24:53.651 "method": "bdev_nvme_attach_controller" 00:24:53.651 } 00:24:53.651 EOF 00:24:53.651 )") 00:24:53.651 06:50:58 -- nvmf/common.sh@543 -- # cat 00:24:53.651 06:50:58 -- nvmf/common.sh@523 -- # for 
subsystem in "${@:-1}" 00:24:53.651 06:50:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:53.651 { 00:24:53.651 "params": { 00:24:53.651 "name": "Nvme$subsystem", 00:24:53.651 "trtype": "$TEST_TRANSPORT", 00:24:53.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:53.651 "adrfam": "ipv4", 00:24:53.651 "trsvcid": "$NVMF_PORT", 00:24:53.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:53.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:53.651 "hdgst": ${hdgst:-false}, 00:24:53.651 "ddgst": ${ddgst:-false} 00:24:53.651 }, 00:24:53.651 "method": "bdev_nvme_attach_controller" 00:24:53.651 } 00:24:53.651 EOF 00:24:53.651 )") 00:24:53.651 06:50:58 -- nvmf/common.sh@543 -- # cat 00:24:53.651 06:50:58 -- nvmf/common.sh@545 -- # jq . 00:24:53.651 06:50:58 -- nvmf/common.sh@546 -- # IFS=, 00:24:53.651 06:50:58 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:24:53.651 "params": { 00:24:53.651 "name": "Nvme1", 00:24:53.651 "trtype": "tcp", 00:24:53.651 "traddr": "10.0.0.2", 00:24:53.651 "adrfam": "ipv4", 00:24:53.651 "trsvcid": "4420", 00:24:53.651 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:53.651 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:53.651 "hdgst": false, 00:24:53.651 "ddgst": false 00:24:53.651 }, 00:24:53.651 "method": "bdev_nvme_attach_controller" 00:24:53.651 },{ 00:24:53.651 "params": { 00:24:53.651 "name": "Nvme2", 00:24:53.651 "trtype": "tcp", 00:24:53.651 "traddr": "10.0.0.2", 00:24:53.651 "adrfam": "ipv4", 00:24:53.651 "trsvcid": "4420", 00:24:53.651 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:53.651 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:53.651 "hdgst": false, 00:24:53.651 "ddgst": false 00:24:53.651 }, 00:24:53.651 "method": "bdev_nvme_attach_controller" 00:24:53.651 },{ 00:24:53.651 "params": { 00:24:53.651 "name": "Nvme3", 00:24:53.651 "trtype": "tcp", 00:24:53.651 "traddr": "10.0.0.2", 00:24:53.651 "adrfam": "ipv4", 00:24:53.651 "trsvcid": "4420", 00:24:53.651 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:53.651 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:53.651 "hdgst": false, 00:24:53.651 "ddgst": false 00:24:53.651 }, 00:24:53.651 "method": "bdev_nvme_attach_controller" 00:24:53.651 },{ 00:24:53.651 "params": { 00:24:53.651 "name": "Nvme4", 00:24:53.651 "trtype": "tcp", 00:24:53.651 "traddr": "10.0.0.2", 00:24:53.651 "adrfam": "ipv4", 00:24:53.651 "trsvcid": "4420", 00:24:53.651 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:53.651 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:53.651 "hdgst": false, 00:24:53.651 "ddgst": false 00:24:53.651 }, 00:24:53.651 "method": "bdev_nvme_attach_controller" 00:24:53.651 },{ 00:24:53.651 "params": { 00:24:53.651 "name": "Nvme5", 00:24:53.651 "trtype": "tcp", 00:24:53.651 "traddr": "10.0.0.2", 00:24:53.651 "adrfam": "ipv4", 00:24:53.651 "trsvcid": "4420", 00:24:53.651 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:53.651 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:53.651 "hdgst": false, 00:24:53.651 "ddgst": false 00:24:53.651 }, 00:24:53.651 "method": "bdev_nvme_attach_controller" 00:24:53.651 },{ 00:24:53.651 "params": { 00:24:53.651 "name": "Nvme6", 00:24:53.651 "trtype": "tcp", 00:24:53.651 "traddr": "10.0.0.2", 00:24:53.651 "adrfam": "ipv4", 00:24:53.651 "trsvcid": "4420", 00:24:53.651 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:53.651 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:53.651 "hdgst": false, 00:24:53.651 "ddgst": false 00:24:53.651 }, 00:24:53.651 "method": "bdev_nvme_attach_controller" 00:24:53.651 },{ 00:24:53.651 "params": { 00:24:53.651 "name": "Nvme7", 00:24:53.651 "trtype": 
"tcp", 00:24:53.651 "traddr": "10.0.0.2", 00:24:53.651 "adrfam": "ipv4", 00:24:53.651 "trsvcid": "4420", 00:24:53.651 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:53.651 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:53.651 "hdgst": false, 00:24:53.651 "ddgst": false 00:24:53.651 }, 00:24:53.651 "method": "bdev_nvme_attach_controller" 00:24:53.651 },{ 00:24:53.651 "params": { 00:24:53.651 "name": "Nvme8", 00:24:53.651 "trtype": "tcp", 00:24:53.651 "traddr": "10.0.0.2", 00:24:53.651 "adrfam": "ipv4", 00:24:53.651 "trsvcid": "4420", 00:24:53.651 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:53.651 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:53.651 "hdgst": false, 00:24:53.651 "ddgst": false 00:24:53.651 }, 00:24:53.651 "method": "bdev_nvme_attach_controller" 00:24:53.651 },{ 00:24:53.651 "params": { 00:24:53.651 "name": "Nvme9", 00:24:53.651 "trtype": "tcp", 00:24:53.651 "traddr": "10.0.0.2", 00:24:53.651 "adrfam": "ipv4", 00:24:53.651 "trsvcid": "4420", 00:24:53.651 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:53.651 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:53.651 "hdgst": false, 00:24:53.651 "ddgst": false 00:24:53.651 }, 00:24:53.651 "method": "bdev_nvme_attach_controller" 00:24:53.651 },{ 00:24:53.651 "params": { 00:24:53.651 "name": "Nvme10", 00:24:53.651 "trtype": "tcp", 00:24:53.651 "traddr": "10.0.0.2", 00:24:53.651 "adrfam": "ipv4", 00:24:53.651 "trsvcid": "4420", 00:24:53.651 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:53.651 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:53.651 "hdgst": false, 00:24:53.651 "ddgst": false 00:24:53.652 }, 00:24:53.652 "method": "bdev_nvme_attach_controller" 00:24:53.652 }' 00:24:53.652 [2024-04-17 06:50:58.102640] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:24:53.652 [2024-04-17 06:50:58.102730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57828 ] 00:24:53.652 EAL: No free 2048 kB hugepages reported on node 1 00:24:53.652 [2024-04-17 06:50:58.167227] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.652 [2024-04-17 06:50:58.252418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.549 Running I/O for 10 seconds... 
00:24:55.807 06:51:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:55.807 06:51:00 -- common/autotest_common.sh@850 -- # return 0 00:24:55.807 06:51:00 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:55.807 06:51:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:55.807 06:51:00 -- common/autotest_common.sh@10 -- # set +x 00:24:55.807 06:51:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:55.807 06:51:00 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:55.807 06:51:00 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:55.807 06:51:00 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:55.807 06:51:00 -- target/shutdown.sh@57 -- # local ret=1 00:24:55.807 06:51:00 -- target/shutdown.sh@58 -- # local i 00:24:55.807 06:51:00 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:55.807 06:51:00 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:55.807 06:51:00 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:55.807 06:51:00 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:55.807 06:51:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:55.807 06:51:00 -- common/autotest_common.sh@10 -- # set +x 00:24:55.807 06:51:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:55.807 06:51:00 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:55.807 06:51:00 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:55.807 06:51:00 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:56.064 06:51:00 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:56.065 06:51:00 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:56.065 06:51:00 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:56.065 06:51:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.065 06:51:00 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:56.065 06:51:00 -- common/autotest_common.sh@10 -- # set +x 00:24:56.065 06:51:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.065 06:51:00 -- target/shutdown.sh@60 -- # read_io_count=67 00:24:56.065 06:51:00 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:24:56.065 06:51:00 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:56.322 06:51:00 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:56.322 06:51:00 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:56.322 06:51:00 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:56.322 06:51:00 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:56.322 06:51:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.322 06:51:00 -- common/autotest_common.sh@10 -- # set +x 00:24:56.322 06:51:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.322 06:51:00 -- target/shutdown.sh@60 -- # read_io_count=131 00:24:56.323 06:51:00 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:24:56.323 06:51:00 -- target/shutdown.sh@64 -- # ret=0 00:24:56.323 06:51:00 -- target/shutdown.sh@65 -- # break 00:24:56.323 06:51:00 -- target/shutdown.sh@69 -- # return 0 00:24:56.323 06:51:00 -- target/shutdown.sh@110 -- # killprocess 57828 00:24:56.323 06:51:00 -- common/autotest_common.sh@936 -- # '[' -z 57828 ']' 00:24:56.323 06:51:00 -- common/autotest_common.sh@940 -- # kill -0 57828 00:24:56.323 06:51:00 -- common/autotest_common.sh@941 -- # uname 00:24:56.323 06:51:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 
00:24:56.323 06:51:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57828 00:24:56.323 06:51:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:56.323 06:51:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:56.323 06:51:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57828' 00:24:56.323 killing process with pid 57828 00:24:56.323 06:51:00 -- common/autotest_common.sh@955 -- # kill 57828 00:24:56.323 06:51:00 -- common/autotest_common.sh@960 -- # wait 57828 00:24:56.581 Received shutdown signal, test time was about 0.961090 seconds 00:24:56.581 00:24:56.581 Latency(us) 00:24:56.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:56.581 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:56.581 Verification LBA range: start 0x0 length 0x400 00:24:56.581 Nvme1n1 : 0.91 223.20 13.95 0.00 0.00 278017.92 11505.21 259425.47 00:24:56.581 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:56.581 Verification LBA range: start 0x0 length 0x400 00:24:56.581 Nvme2n1 : 0.94 203.23 12.70 0.00 0.00 305003.33 40195.41 273406.48 00:24:56.581 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:56.581 Verification LBA range: start 0x0 length 0x400 00:24:56.581 Nvme3n1 : 0.90 212.35 13.27 0.00 0.00 285319.02 23787.14 278066.82 00:24:56.581 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:56.581 Verification LBA range: start 0x0 length 0x400 00:24:56.581 Nvme4n1 : 0.92 208.34 13.02 0.00 0.00 284829.27 19126.80 292047.83 00:24:56.581 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:56.581 Verification LBA range: start 0x0 length 0x400 00:24:56.581 Nvme5n1 : 0.94 204.42 12.78 0.00 0.00 284402.98 23981.32 296708.17 00:24:56.581 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:56.581 Verification LBA range: start 0x0 length 0x400 00:24:56.581 Nvme6n1 : 0.95 201.61 12.60 0.00 0.00 281011.90 25243.50 307582.29 00:24:56.581 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:56.581 Verification LBA range: start 0x0 length 0x400 00:24:56.581 Nvme7n1 : 0.92 208.98 13.06 0.00 0.00 265588.18 30680.56 273406.48 00:24:56.581 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:56.581 Verification LBA range: start 0x0 length 0x400 00:24:56.581 Nvme8n1 : 0.90 213.35 13.33 0.00 0.00 253349.23 18641.35 242337.56 00:24:56.581 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:56.581 Verification LBA range: start 0x0 length 0x400 00:24:56.581 Nvme9n1 : 0.93 206.08 12.88 0.00 0.00 257662.42 21554.06 285834.05 00:24:56.581 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:56.581 Verification LBA range: start 0x0 length 0x400 00:24:56.581 Nvme10n1 : 0.96 199.94 12.50 0.00 0.00 261279.67 11408.12 337097.77 00:24:56.581 =================================================================================================================== 00:24:56.581 Total : 2081.50 130.09 0.00 0.00 275661.12 11408.12 337097.77 00:24:56.581 06:51:01 -- target/shutdown.sh@113 -- # sleep 1 00:24:57.952 06:51:02 -- target/shutdown.sh@114 -- # kill -0 57652 00:24:57.952 06:51:02 -- target/shutdown.sh@116 -- # stoptarget 00:24:57.952 06:51:02 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:57.952 06:51:02 -- target/shutdown.sh@42 -- # rm -rf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:57.952 06:51:02 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:57.952 06:51:02 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:57.952 06:51:02 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:57.952 06:51:02 -- nvmf/common.sh@117 -- # sync 00:24:57.952 06:51:02 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:57.952 06:51:02 -- nvmf/common.sh@120 -- # set +e 00:24:57.952 06:51:02 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:57.952 06:51:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:57.952 rmmod nvme_tcp 00:24:57.952 rmmod nvme_fabrics 00:24:57.952 rmmod nvme_keyring 00:24:57.952 06:51:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:57.952 06:51:02 -- nvmf/common.sh@124 -- # set -e 00:24:57.952 06:51:02 -- nvmf/common.sh@125 -- # return 0 00:24:57.952 06:51:02 -- nvmf/common.sh@478 -- # '[' -n 57652 ']' 00:24:57.952 06:51:02 -- nvmf/common.sh@479 -- # killprocess 57652 00:24:57.952 06:51:02 -- common/autotest_common.sh@936 -- # '[' -z 57652 ']' 00:24:57.952 06:51:02 -- common/autotest_common.sh@940 -- # kill -0 57652 00:24:57.952 06:51:02 -- common/autotest_common.sh@941 -- # uname 00:24:57.952 06:51:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:57.952 06:51:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57652 00:24:57.952 06:51:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:57.952 06:51:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:57.952 06:51:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57652' 00:24:57.952 killing process with pid 57652 00:24:57.952 06:51:02 -- common/autotest_common.sh@955 -- # kill 57652 00:24:57.952 06:51:02 -- common/autotest_common.sh@960 -- # wait 57652 00:24:58.210 06:51:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:58.210 06:51:02 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:58.210 06:51:02 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:58.210 06:51:02 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:58.210 06:51:02 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:58.210 06:51:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.210 06:51:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:58.210 06:51:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.742 06:51:04 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:00.742 00:25:00.742 real 0m7.789s 00:25:00.742 user 0m23.504s 00:25:00.742 sys 0m1.604s 00:25:00.742 06:51:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:00.742 06:51:04 -- common/autotest_common.sh@10 -- # set +x 00:25:00.742 ************************************ 00:25:00.742 END TEST nvmf_shutdown_tc2 00:25:00.742 ************************************ 00:25:00.742 06:51:04 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:00.742 06:51:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:00.742 06:51:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:00.742 06:51:04 -- common/autotest_common.sh@10 -- # set +x 00:25:00.742 ************************************ 00:25:00.742 START TEST nvmf_shutdown_tc3 00:25:00.742 ************************************ 00:25:00.742 06:51:04 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3 00:25:00.742 06:51:04 -- 
target/shutdown.sh@121 -- # starttarget 00:25:00.742 06:51:04 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:00.742 06:51:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:00.742 06:51:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:00.742 06:51:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:00.742 06:51:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:00.742 06:51:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:00.742 06:51:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.742 06:51:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:00.742 06:51:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.742 06:51:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:00.742 06:51:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:00.742 06:51:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:00.742 06:51:04 -- common/autotest_common.sh@10 -- # set +x 00:25:00.742 06:51:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:00.742 06:51:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:00.742 06:51:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:00.742 06:51:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:00.742 06:51:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:00.742 06:51:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:00.742 06:51:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:00.742 06:51:04 -- nvmf/common.sh@295 -- # net_devs=() 00:25:00.742 06:51:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:00.742 06:51:04 -- nvmf/common.sh@296 -- # e810=() 00:25:00.742 06:51:04 -- nvmf/common.sh@296 -- # local -ga e810 00:25:00.742 06:51:04 -- nvmf/common.sh@297 -- # x722=() 00:25:00.742 06:51:04 -- nvmf/common.sh@297 -- # local -ga x722 00:25:00.742 06:51:04 -- nvmf/common.sh@298 -- # mlx=() 00:25:00.742 06:51:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:00.742 06:51:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:00.742 06:51:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:00.742 06:51:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:00.742 06:51:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:00.742 06:51:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:00.742 06:51:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:00.742 06:51:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:00.742 06:51:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:00.742 06:51:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:00.742 06:51:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:00.742 06:51:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:00.742 06:51:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:00.742 06:51:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:00.742 06:51:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:00.742 06:51:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:00.742 06:51:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:00.742 06:51:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:00.742 06:51:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:00.742 06:51:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 
0x159b)' 00:25:00.742 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:00.742 06:51:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:00.742 06:51:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:00.742 06:51:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.742 06:51:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.742 06:51:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:00.742 06:51:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:00.742 06:51:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:00.742 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:00.742 06:51:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:00.742 06:51:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:00.742 06:51:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.742 06:51:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.742 06:51:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:00.742 06:51:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:00.742 06:51:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:00.742 06:51:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:00.742 06:51:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:00.742 06:51:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.742 06:51:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:00.742 06:51:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.742 06:51:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:00.742 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:00.742 06:51:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.742 06:51:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:00.742 06:51:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.742 06:51:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:00.742 06:51:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.742 06:51:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:00.742 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:00.742 06:51:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.742 06:51:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:00.742 06:51:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:00.742 06:51:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:00.742 06:51:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:00.742 06:51:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:00.742 06:51:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:00.742 06:51:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:00.742 06:51:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:00.742 06:51:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:00.742 06:51:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:00.742 06:51:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:00.742 06:51:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:00.742 06:51:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:00.742 06:51:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:00.742 06:51:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:00.742 06:51:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:00.742 
06:51:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:00.742 06:51:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:00.742 06:51:05 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:00.742 06:51:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:00.742 06:51:05 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:00.742 06:51:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:00.742 06:51:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:00.742 06:51:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:00.742 06:51:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:00.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:00.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:25:00.742 00:25:00.742 --- 10.0.0.2 ping statistics --- 00:25:00.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.742 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:25:00.742 06:51:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:00.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:00.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:25:00.742 00:25:00.742 --- 10.0.0.1 ping statistics --- 00:25:00.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.742 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:25:00.742 06:51:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:00.742 06:51:05 -- nvmf/common.sh@411 -- # return 0 00:25:00.742 06:51:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:00.742 06:51:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:00.742 06:51:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:00.742 06:51:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:00.742 06:51:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:00.742 06:51:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:00.743 06:51:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:00.743 06:51:05 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:00.743 06:51:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:00.743 06:51:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:00.743 06:51:05 -- common/autotest_common.sh@10 -- # set +x 00:25:00.743 06:51:05 -- nvmf/common.sh@470 -- # nvmfpid=58858 00:25:00.743 06:51:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:00.743 06:51:05 -- nvmf/common.sh@471 -- # waitforlisten 58858 00:25:00.743 06:51:05 -- common/autotest_common.sh@817 -- # '[' -z 58858 ']' 00:25:00.743 06:51:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.743 06:51:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:00.743 06:51:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
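Note: nvmfappstart launches the tc3 target (pid 58858) inside the namespace and then blocks in waitforlisten until the app answers on /var/tmp/spdk.sock. A simplified stand-in for that wait, assuming scripts/rpc.py is available, just retries a cheap RPC; the real helper in autotest_common.sh does more bookkeeping, this only captures the polling idea:

# Simplified waitforlisten sketch: poll the RPC socket until the app responds
# or the process dies, giving up after ~10 s.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 100; i != 0; i--)); do
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0                     # RPC server is up and answering
        fi
        kill -0 "$pid" || return 1       # app exited before it started listening
        sleep 0.1
    done
    return 1
}
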
00:25:00.743 06:51:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:00.743 06:51:05 -- common/autotest_common.sh@10 -- # set +x 00:25:00.743 [2024-04-17 06:51:05.155652] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:25:00.743 [2024-04-17 06:51:05.155730] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.743 EAL: No free 2048 kB hugepages reported on node 1 00:25:00.743 [2024-04-17 06:51:05.227399] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:00.743 [2024-04-17 06:51:05.319061] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:00.743 [2024-04-17 06:51:05.319114] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.743 [2024-04-17 06:51:05.319128] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.743 [2024-04-17 06:51:05.319140] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.743 [2024-04-17 06:51:05.319149] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:00.743 [2024-04-17 06:51:05.319211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:00.743 [2024-04-17 06:51:05.319280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:00.743 [2024-04-17 06:51:05.319344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:00.743 [2024-04-17 06:51:05.319347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:01.001 06:51:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:01.001 06:51:05 -- common/autotest_common.sh@850 -- # return 0 00:25:01.001 06:51:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:01.001 06:51:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:01.001 06:51:05 -- common/autotest_common.sh@10 -- # set +x 00:25:01.001 06:51:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:01.001 06:51:05 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:01.001 06:51:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.001 06:51:05 -- common/autotest_common.sh@10 -- # set +x 00:25:01.001 [2024-04-17 06:51:05.461813] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:01.001 06:51:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.001 06:51:05 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:01.001 06:51:05 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:01.001 06:51:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:01.001 06:51:05 -- common/autotest_common.sh@10 -- # set +x 00:25:01.001 06:51:05 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:01.001 06:51:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:01.001 06:51:05 -- target/shutdown.sh@28 -- # cat 00:25:01.001 06:51:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:01.001 06:51:05 -- target/shutdown.sh@28 -- # cat 00:25:01.001 06:51:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:01.001 06:51:05 -- target/shutdown.sh@28 -- # cat 
00:25:01.001 06:51:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:01.001 06:51:05 -- target/shutdown.sh@28 -- # cat 00:25:01.001 06:51:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:01.001 06:51:05 -- target/shutdown.sh@28 -- # cat 00:25:01.001 06:51:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:01.001 06:51:05 -- target/shutdown.sh@28 -- # cat 00:25:01.001 06:51:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:01.001 06:51:05 -- target/shutdown.sh@28 -- # cat 00:25:01.001 06:51:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:01.001 06:51:05 -- target/shutdown.sh@28 -- # cat 00:25:01.001 06:51:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:01.001 06:51:05 -- target/shutdown.sh@28 -- # cat 00:25:01.001 06:51:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:01.001 06:51:05 -- target/shutdown.sh@28 -- # cat 00:25:01.001 06:51:05 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:01.001 06:51:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.001 06:51:05 -- common/autotest_common.sh@10 -- # set +x 00:25:01.001 Malloc1 00:25:01.001 [2024-04-17 06:51:05.536983] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:01.001 Malloc2 00:25:01.259 Malloc3 00:25:01.259 Malloc4 00:25:01.259 Malloc5 00:25:01.259 Malloc6 00:25:01.259 Malloc7 00:25:01.259 Malloc8 00:25:01.518 Malloc9 00:25:01.518 Malloc10 00:25:01.518 06:51:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.518 06:51:05 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:01.518 06:51:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:01.518 06:51:05 -- common/autotest_common.sh@10 -- # set +x 00:25:01.518 06:51:05 -- target/shutdown.sh@125 -- # perfpid=59038 00:25:01.518 06:51:05 -- target/shutdown.sh@126 -- # waitforlisten 59038 /var/tmp/bdevperf.sock 00:25:01.518 06:51:05 -- common/autotest_common.sh@817 -- # '[' -z 59038 ']' 00:25:01.518 06:51:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:01.518 06:51:05 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:01.518 06:51:05 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:01.518 06:51:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:01.518 06:51:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:01.518 06:51:05 -- nvmf/common.sh@521 -- # config=() 00:25:01.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
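The create_subsystems block above batches per-subsystem RPC calls into rpcs.txt, and the result is visible a few lines up: bdevs Malloc1 through Malloc10 and an NVMe/TCP listener on 10.0.0.2 port 4420. The batch itself is not echoed in this log, but creating one such subsystem typically comes down to RPCs along these lines (an illustrative sketch, not the script's literal contents; rpc_cmd is the test suite's wrapper around rpc.py, and the bdev size and block size here are arbitrary):

    # illustrative sketch: RPCs that back one MallocN namespace behind cnodeN
    for i in {1..10}; do
        rpc_cmd bdev_malloc_create -b "Malloc$i" 64 512          # 64 MiB bdev, 512 B blocks
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
                -t tcp -a 10.0.0.2 -s 4420
    done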
00:25:01.518 06:51:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:01.518 06:51:05 -- nvmf/common.sh@521 -- # local subsystem config 00:25:01.518 06:51:05 -- common/autotest_common.sh@10 -- # set +x 00:25:01.518 06:51:05 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:01.518 06:51:05 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:01.518 { 00:25:01.518 "params": { 00:25:01.518 "name": "Nvme$subsystem", 00:25:01.518 "trtype": "$TEST_TRANSPORT", 00:25:01.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:01.518 "adrfam": "ipv4", 00:25:01.518 "trsvcid": "$NVMF_PORT", 00:25:01.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:01.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:01.518 "hdgst": ${hdgst:-false}, 00:25:01.518 "ddgst": ${ddgst:-false} 00:25:01.518 }, 00:25:01.518 "method": "bdev_nvme_attach_controller" 00:25:01.518 } 00:25:01.518 EOF 00:25:01.518 )") 00:25:01.518 06:51:05 -- nvmf/common.sh@543 -- # cat 00:25:01.518 06:51:05 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:01.518 06:51:05 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:01.518 { 00:25:01.518 "params": { 00:25:01.518 "name": "Nvme$subsystem", 00:25:01.518 "trtype": "$TEST_TRANSPORT", 00:25:01.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:01.518 "adrfam": "ipv4", 00:25:01.518 "trsvcid": "$NVMF_PORT", 00:25:01.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:01.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:01.518 "hdgst": ${hdgst:-false}, 00:25:01.518 "ddgst": ${ddgst:-false} 00:25:01.518 }, 00:25:01.518 "method": "bdev_nvme_attach_controller" 00:25:01.518 } 00:25:01.518 EOF 00:25:01.518 )") 00:25:01.518 06:51:05 -- nvmf/common.sh@543 -- # cat 00:25:01.518 06:51:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:01.518 06:51:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:01.518 { 00:25:01.518 "params": { 00:25:01.518 "name": "Nvme$subsystem", 00:25:01.518 "trtype": "$TEST_TRANSPORT", 00:25:01.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:01.518 "adrfam": "ipv4", 00:25:01.518 "trsvcid": "$NVMF_PORT", 00:25:01.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:01.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:01.518 "hdgst": ${hdgst:-false}, 00:25:01.518 "ddgst": ${ddgst:-false} 00:25:01.518 }, 00:25:01.518 "method": "bdev_nvme_attach_controller" 00:25:01.518 } 00:25:01.518 EOF 00:25:01.518 )") 00:25:01.518 06:51:06 -- nvmf/common.sh@543 -- # cat 00:25:01.518 06:51:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:01.518 06:51:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:01.518 { 00:25:01.518 "params": { 00:25:01.518 "name": "Nvme$subsystem", 00:25:01.518 "trtype": "$TEST_TRANSPORT", 00:25:01.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:01.518 "adrfam": "ipv4", 00:25:01.518 "trsvcid": "$NVMF_PORT", 00:25:01.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:01.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:01.518 "hdgst": ${hdgst:-false}, 00:25:01.518 "ddgst": ${ddgst:-false} 00:25:01.518 }, 00:25:01.518 "method": "bdev_nvme_attach_controller" 00:25:01.518 } 00:25:01.518 EOF 00:25:01.518 )") 00:25:01.518 06:51:06 -- nvmf/common.sh@543 -- # cat 00:25:01.518 06:51:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:01.518 06:51:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:01.518 { 00:25:01.518 "params": { 00:25:01.518 "name": "Nvme$subsystem", 00:25:01.518 "trtype": "$TEST_TRANSPORT", 00:25:01.518 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:25:01.518 "adrfam": "ipv4", 00:25:01.518 "trsvcid": "$NVMF_PORT", 00:25:01.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:01.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:01.518 "hdgst": ${hdgst:-false}, 00:25:01.518 "ddgst": ${ddgst:-false} 00:25:01.518 }, 00:25:01.518 "method": "bdev_nvme_attach_controller" 00:25:01.518 } 00:25:01.518 EOF 00:25:01.518 )") 00:25:01.518 06:51:06 -- nvmf/common.sh@543 -- # cat 00:25:01.518 06:51:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:01.518 06:51:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:01.518 { 00:25:01.518 "params": { 00:25:01.518 "name": "Nvme$subsystem", 00:25:01.518 "trtype": "$TEST_TRANSPORT", 00:25:01.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:01.518 "adrfam": "ipv4", 00:25:01.518 "trsvcid": "$NVMF_PORT", 00:25:01.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:01.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:01.518 "hdgst": ${hdgst:-false}, 00:25:01.518 "ddgst": ${ddgst:-false} 00:25:01.518 }, 00:25:01.518 "method": "bdev_nvme_attach_controller" 00:25:01.519 } 00:25:01.519 EOF 00:25:01.519 )") 00:25:01.519 06:51:06 -- nvmf/common.sh@543 -- # cat 00:25:01.519 06:51:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:01.519 06:51:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:01.519 { 00:25:01.519 "params": { 00:25:01.519 "name": "Nvme$subsystem", 00:25:01.519 "trtype": "$TEST_TRANSPORT", 00:25:01.519 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:01.519 "adrfam": "ipv4", 00:25:01.519 "trsvcid": "$NVMF_PORT", 00:25:01.519 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:01.519 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:01.519 "hdgst": ${hdgst:-false}, 00:25:01.519 "ddgst": ${ddgst:-false} 00:25:01.519 }, 00:25:01.519 "method": "bdev_nvme_attach_controller" 00:25:01.519 } 00:25:01.519 EOF 00:25:01.519 )") 00:25:01.519 06:51:06 -- nvmf/common.sh@543 -- # cat 00:25:01.519 06:51:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:01.519 06:51:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:01.519 { 00:25:01.519 "params": { 00:25:01.519 "name": "Nvme$subsystem", 00:25:01.519 "trtype": "$TEST_TRANSPORT", 00:25:01.519 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:01.519 "adrfam": "ipv4", 00:25:01.519 "trsvcid": "$NVMF_PORT", 00:25:01.519 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:01.519 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:01.519 "hdgst": ${hdgst:-false}, 00:25:01.519 "ddgst": ${ddgst:-false} 00:25:01.519 }, 00:25:01.519 "method": "bdev_nvme_attach_controller" 00:25:01.519 } 00:25:01.519 EOF 00:25:01.519 )") 00:25:01.519 06:51:06 -- nvmf/common.sh@543 -- # cat 00:25:01.519 06:51:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:01.519 06:51:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:01.519 { 00:25:01.519 "params": { 00:25:01.519 "name": "Nvme$subsystem", 00:25:01.519 "trtype": "$TEST_TRANSPORT", 00:25:01.519 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:01.519 "adrfam": "ipv4", 00:25:01.519 "trsvcid": "$NVMF_PORT", 00:25:01.519 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:01.519 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:01.519 "hdgst": ${hdgst:-false}, 00:25:01.519 "ddgst": ${ddgst:-false} 00:25:01.519 }, 00:25:01.519 "method": "bdev_nvme_attach_controller" 00:25:01.519 } 00:25:01.519 EOF 00:25:01.519 )") 00:25:01.519 06:51:06 -- nvmf/common.sh@543 -- # cat 00:25:01.519 06:51:06 -- nvmf/common.sh@523 -- # for 
subsystem in "${@:-1}" 00:25:01.519 06:51:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:01.519 { 00:25:01.519 "params": { 00:25:01.519 "name": "Nvme$subsystem", 00:25:01.519 "trtype": "$TEST_TRANSPORT", 00:25:01.519 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:01.519 "adrfam": "ipv4", 00:25:01.519 "trsvcid": "$NVMF_PORT", 00:25:01.519 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:01.519 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:01.519 "hdgst": ${hdgst:-false}, 00:25:01.519 "ddgst": ${ddgst:-false} 00:25:01.519 }, 00:25:01.519 "method": "bdev_nvme_attach_controller" 00:25:01.519 } 00:25:01.519 EOF 00:25:01.519 )") 00:25:01.519 06:51:06 -- nvmf/common.sh@543 -- # cat 00:25:01.519 06:51:06 -- nvmf/common.sh@545 -- # jq . 00:25:01.519 06:51:06 -- nvmf/common.sh@546 -- # IFS=, 00:25:01.519 06:51:06 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:01.519 "params": { 00:25:01.519 "name": "Nvme1", 00:25:01.519 "trtype": "tcp", 00:25:01.519 "traddr": "10.0.0.2", 00:25:01.519 "adrfam": "ipv4", 00:25:01.519 "trsvcid": "4420", 00:25:01.519 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:01.519 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:01.519 "hdgst": false, 00:25:01.519 "ddgst": false 00:25:01.519 }, 00:25:01.519 "method": "bdev_nvme_attach_controller" 00:25:01.519 },{ 00:25:01.519 "params": { 00:25:01.519 "name": "Nvme2", 00:25:01.519 "trtype": "tcp", 00:25:01.519 "traddr": "10.0.0.2", 00:25:01.519 "adrfam": "ipv4", 00:25:01.519 "trsvcid": "4420", 00:25:01.519 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:01.519 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:01.519 "hdgst": false, 00:25:01.519 "ddgst": false 00:25:01.519 }, 00:25:01.519 "method": "bdev_nvme_attach_controller" 00:25:01.519 },{ 00:25:01.519 "params": { 00:25:01.519 "name": "Nvme3", 00:25:01.519 "trtype": "tcp", 00:25:01.519 "traddr": "10.0.0.2", 00:25:01.519 "adrfam": "ipv4", 00:25:01.519 "trsvcid": "4420", 00:25:01.519 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:01.519 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:01.519 "hdgst": false, 00:25:01.519 "ddgst": false 00:25:01.519 }, 00:25:01.519 "method": "bdev_nvme_attach_controller" 00:25:01.519 },{ 00:25:01.519 "params": { 00:25:01.519 "name": "Nvme4", 00:25:01.519 "trtype": "tcp", 00:25:01.519 "traddr": "10.0.0.2", 00:25:01.519 "adrfam": "ipv4", 00:25:01.519 "trsvcid": "4420", 00:25:01.519 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:01.519 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:01.519 "hdgst": false, 00:25:01.519 "ddgst": false 00:25:01.519 }, 00:25:01.519 "method": "bdev_nvme_attach_controller" 00:25:01.519 },{ 00:25:01.519 "params": { 00:25:01.519 "name": "Nvme5", 00:25:01.519 "trtype": "tcp", 00:25:01.519 "traddr": "10.0.0.2", 00:25:01.519 "adrfam": "ipv4", 00:25:01.519 "trsvcid": "4420", 00:25:01.519 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:01.519 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:01.519 "hdgst": false, 00:25:01.519 "ddgst": false 00:25:01.519 }, 00:25:01.519 "method": "bdev_nvme_attach_controller" 00:25:01.519 },{ 00:25:01.519 "params": { 00:25:01.519 "name": "Nvme6", 00:25:01.519 "trtype": "tcp", 00:25:01.519 "traddr": "10.0.0.2", 00:25:01.519 "adrfam": "ipv4", 00:25:01.519 "trsvcid": "4420", 00:25:01.519 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:01.519 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:01.519 "hdgst": false, 00:25:01.519 "ddgst": false 00:25:01.519 }, 00:25:01.519 "method": "bdev_nvme_attach_controller" 00:25:01.519 },{ 00:25:01.519 "params": { 00:25:01.519 "name": "Nvme7", 00:25:01.519 "trtype": 
"tcp", 00:25:01.519 "traddr": "10.0.0.2", 00:25:01.519 "adrfam": "ipv4", 00:25:01.519 "trsvcid": "4420", 00:25:01.519 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:01.519 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:01.519 "hdgst": false, 00:25:01.519 "ddgst": false 00:25:01.519 }, 00:25:01.519 "method": "bdev_nvme_attach_controller" 00:25:01.519 },{ 00:25:01.519 "params": { 00:25:01.519 "name": "Nvme8", 00:25:01.519 "trtype": "tcp", 00:25:01.519 "traddr": "10.0.0.2", 00:25:01.519 "adrfam": "ipv4", 00:25:01.519 "trsvcid": "4420", 00:25:01.519 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:01.519 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:01.519 "hdgst": false, 00:25:01.519 "ddgst": false 00:25:01.519 }, 00:25:01.519 "method": "bdev_nvme_attach_controller" 00:25:01.519 },{ 00:25:01.519 "params": { 00:25:01.519 "name": "Nvme9", 00:25:01.519 "trtype": "tcp", 00:25:01.519 "traddr": "10.0.0.2", 00:25:01.519 "adrfam": "ipv4", 00:25:01.519 "trsvcid": "4420", 00:25:01.519 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:01.519 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:01.519 "hdgst": false, 00:25:01.519 "ddgst": false 00:25:01.519 }, 00:25:01.519 "method": "bdev_nvme_attach_controller" 00:25:01.519 },{ 00:25:01.519 "params": { 00:25:01.519 "name": "Nvme10", 00:25:01.519 "trtype": "tcp", 00:25:01.519 "traddr": "10.0.0.2", 00:25:01.519 "adrfam": "ipv4", 00:25:01.519 "trsvcid": "4420", 00:25:01.519 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:01.519 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:01.519 "hdgst": false, 00:25:01.519 "ddgst": false 00:25:01.519 }, 00:25:01.519 "method": "bdev_nvme_attach_controller" 00:25:01.519 }' 00:25:01.519 [2024-04-17 06:51:06.035294] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:25:01.519 [2024-04-17 06:51:06.035373] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59038 ] 00:25:01.519 EAL: No free 2048 kB hugepages reported on node 1 00:25:01.519 [2024-04-17 06:51:06.098787] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.777 [2024-04-17 06:51:06.183275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.675 Running I/O for 10 seconds... 
00:25:03.675 06:51:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:03.675 06:51:08 -- common/autotest_common.sh@850 -- # return 0 00:25:03.675 06:51:08 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:03.675 06:51:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.675 06:51:08 -- common/autotest_common.sh@10 -- # set +x 00:25:03.675 06:51:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:03.675 06:51:08 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:03.675 06:51:08 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:03.675 06:51:08 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:03.675 06:51:08 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:03.675 06:51:08 -- target/shutdown.sh@57 -- # local ret=1 00:25:03.675 06:51:08 -- target/shutdown.sh@58 -- # local i 00:25:03.675 06:51:08 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:03.675 06:51:08 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:03.675 06:51:08 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:03.675 06:51:08 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:03.675 06:51:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.675 06:51:08 -- common/autotest_common.sh@10 -- # set +x 00:25:03.675 06:51:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:03.675 06:51:08 -- target/shutdown.sh@60 -- # read_io_count=3 00:25:03.675 06:51:08 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:25:03.675 06:51:08 -- target/shutdown.sh@67 -- # sleep 0.25 00:25:03.934 06:51:08 -- target/shutdown.sh@59 -- # (( i-- )) 00:25:03.934 06:51:08 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:03.934 06:51:08 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:03.934 06:51:08 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:03.934 06:51:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.934 06:51:08 -- common/autotest_common.sh@10 -- # set +x 00:25:03.934 06:51:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:03.934 06:51:08 -- target/shutdown.sh@60 -- # read_io_count=67 00:25:03.934 06:51:08 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:25:03.934 06:51:08 -- target/shutdown.sh@67 -- # sleep 0.25 00:25:04.197 06:51:08 -- target/shutdown.sh@59 -- # (( i-- )) 00:25:04.197 06:51:08 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:04.197 06:51:08 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:04.197 06:51:08 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:04.197 06:51:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.197 06:51:08 -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 06:51:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.197 06:51:08 -- target/shutdown.sh@60 -- # read_io_count=131 00:25:04.197 06:51:08 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:25:04.197 06:51:08 -- target/shutdown.sh@64 -- # ret=0 00:25:04.197 06:51:08 -- target/shutdown.sh@65 -- # break 00:25:04.197 06:51:08 -- target/shutdown.sh@69 -- # return 0 00:25:04.197 06:51:08 -- target/shutdown.sh@135 -- # killprocess 58858 00:25:04.197 06:51:08 -- common/autotest_common.sh@936 -- # '[' -z 58858 ']' 00:25:04.197 06:51:08 -- common/autotest_common.sh@940 -- # kill -0 
58858 00:25:04.197 06:51:08 -- common/autotest_common.sh@941 -- # uname 00:25:04.197 06:51:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:04.197 06:51:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58858 00:25:04.197 06:51:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:04.197 06:51:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:04.197 06:51:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58858' 00:25:04.197 killing process with pid 58858 00:25:04.197 06:51:08 -- common/autotest_common.sh@955 -- # kill 58858 00:25:04.197 06:51:08 -- common/autotest_common.sh@960 -- # wait 58858 00:25:04.197 [2024-04-17 06:51:08.777399] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777493] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777510] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777524] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777547] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777562] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777575] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777588] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777606] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777620] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777632] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777644] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777656] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777668] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777680] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777695] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777708] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777721] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777733] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777746] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777761] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777774] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777789] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777801] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777816] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777830] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777844] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777858] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777870] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777884] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777896] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777908] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777924] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777937] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777950] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777966] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777980] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.777992] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.778004] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the 
state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.778016] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.778031] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.778044] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.778057] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.778069] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.778081] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.778096] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.778109] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.197 [2024-04-17 06:51:08.778121] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.778134] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.778146] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.778159] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.778171] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.778192] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.778205] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.778217] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.778234] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.778251] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.778263] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.778275] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.778291] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.778303] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.778315] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.778327] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214fa30 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.779595] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.779628] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.779643] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.779655] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.779667] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.779680] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.779692] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.779704] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.779716] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.779728] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.779740] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.779752] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.779764] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.779776] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.779788] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.779800] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.779812] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.779824] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.779836] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 
06:51:08.779848] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.779860] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.779872] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.779884] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.779902] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.779915] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.779928] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.779940] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.779952] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.779964] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.779976] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.779988] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.780000] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.780013] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.780025] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.780037] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.780059] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.780071] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.780083] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.780095] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.780107] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.780119] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same 
with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.780130] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.780142] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.780154] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.780183] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.780197] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.780209] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.780221] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.780232] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.780245] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.780261] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.780273] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.780285] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.780297] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.780309] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.780321] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.780333] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.780344] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.780356] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.780368] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.780380] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.780391] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbb60 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.781505] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.781528] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.781542] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.781554] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.781566] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.781578] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.198 [2024-04-17 06:51:08.781590] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.781602] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.781614] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.781626] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.781638] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.781650] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.781662] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.781674] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.781686] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.781703] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.781716] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.781728] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.781740] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.781752] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.781765] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.781776] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the 
state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.781788] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.781800] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.781812] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.781824] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.781836] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.781848] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.781861] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.781874] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.781886] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.781898] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.781909] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.781921] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.781934] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.781952] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.781963] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.781975] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.781987] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.781998] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.782011] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.782023] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.782038] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.782051] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.782063] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.782075] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.782087] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.782098] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.782110] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.782122] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.782134] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.782146] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.782158] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.782186] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.782201] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.782214] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.782226] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.782238] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.782250] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.782262] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.782274] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.782287] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.782298] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9810 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.784897] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfa5c0 is same with the state(5) to be set 00:25:04.199 [2024-04-17 06:51:08.784926] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfa5c0 is same with the state(5) to be set 00:25:04.199 [2024-04-17 
06:51:08.784940] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfa5c0 is same with the state(5) to be set 
[... the same tcp.c:1587 *ERROR* message repeats back-to-back from 06:51:08.784952 through 06:51:08.790464 (elapsed 00:25:04.199-00:25:04.202) for the target-side qpairs tqpair=0x1dfa5c0 (~60 repeats), tqpair=0x1dfaa50 (6 repeats), tqpair=0x1dfb240 (~60 repeats) and tqpair=0x1dfb6d0 (~60 repeats); the repeated entries differ only in their microsecond timestamps ...]
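nvmf_tcp_qpair_set_recv_state emits this error whenever it is asked to set a receive state the qpair is already in (which is what the message text says), so a qpair that keeps getting polled while it is being torn down produces long runs of near-identical lines. When reading a captured log, a small post-processing helper can fold such runs; the script below is a hypothetical sketch (it is not part of SPDK or of this test suite) and assumes the log has already been split so that each entry sits on its own line.

#!/usr/bin/env python3
# Hypothetical helper: collapse runs of SPDK log entries that differ only in
# their timestamps (e.g. the nvmf_tcp_qpair_set_recv_state errors above).
import re
import sys

# Strip the elapsed-time prefix and the bracketed wall-clock timestamp, e.g.
# "00:25:04.199 [2024-04-17 06:51:08.784952] tcp.c:1587:...: *ERROR*: ..."
# so that repeated messages compare equal.
TS_RE = re.compile(r"^\d{2}:\d{2}:\d{2}\.\d+\s+\[\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+\]\s+")

def collapse(lines):
    prev_key = prev_line = None
    repeats = 0
    for line in lines:
        key = TS_RE.sub("", line.rstrip("\n"))
        if key == prev_key:
            repeats += 1
            continue
        if prev_line is not None:
            yield prev_line
            if repeats:
                yield "    [... previous message repeated %d more times ...]\n" % repeats
        prev_key, prev_line, repeats = key, line, 0
    if prev_line is not None:
        yield prev_line
        if repeats:
            yield "    [... previous message repeated %d more times ...]\n" % repeats

if __name__ == "__main__":
    sys.stdout.writelines(collapse(sys.stdin))

Run it as "python3 collapse_log.py < build.log"; the file names are made up for this example.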
00:25:04.202 [2024-04-17 06:51:08.793544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.202 [2024-04-17 06:51:08.793592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... from 06:51:08.793612 through 06:51:08.795294 (elapsed 00:25:04.202-00:25:04.203) the same pattern repeats for ten host-side admin queues: the four outstanding ASYNC EVENT REQUEST commands (qid:0, cid:0-3) on each queue are completed with ABORTED - SQ DELETION (00/08), followed by nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=... is same with the state(5) to be set, for tqpair=0xa37f60, 0xc02840, 0xa3acd0, 0xb79900, 0xaf6be0, 0xa670e0, 0xa5b840, 0xa67490, 0xa7c850 and 0xae3fa0 ...]
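Every aborted command above is completed with the status printed as (00/08): status code type 0x0 (generic command status) and status code 0x08, which the NVMe specification defines as "Command Aborted due to SQ Deletion" and which spdk_nvme_print_completion renders as ABORTED - SQ DELETION. A minimal, illustrative decoder for that SCT/SC pair follows; the function name and the deliberately partial table are made up for this example, not taken from SPDK.

# Decode the "(SCT/SC)" pair that appears in the completion lines, e.g. "(00/08)".
# Only the generic (SCT 0x0) codes that show up in this log are listed.
GENERIC_STATUS_CODES = {
    0x00: "SUCCESSFUL COMPLETION",
    0x08: "ABORTED - SQ DELETION",
}

def decode_status(pair):
    """Turn a string such as '00/08' into a readable status name."""
    sct, sc = (int(part, 16) for part in pair.split("/"))
    if sct == 0x0:
        return GENERIC_STATUS_CODES.get(sc, "GENERIC STATUS 0x%02x" % sc)
    return "SCT 0x%x / SC 0x%02x" % (sct, sc)

print(decode_status("00/08"))  # prints: ABORTED - SQ DELETION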
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.203 [2024-04-17 06:51:08.796507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.203 [2024-04-17 06:51:08.796521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.203 [2024-04-17 06:51:08.796558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.203 [2024-04-17 06:51:08.796581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.203 [2024-04-17 06:51:08.796598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.203 [2024-04-17 06:51:08.796611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.203 [2024-04-17 06:51:08.796626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.203 [2024-04-17 06:51:08.796639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.203 [2024-04-17 06:51:08.796654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.203 [2024-04-17 06:51:08.796667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.203 [2024-04-17 06:51:08.796683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.203 [2024-04-17 06:51:08.796696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.203 [2024-04-17 06:51:08.796711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.203 [2024-04-17 06:51:08.796725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.203 [2024-04-17 06:51:08.796739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.203 [2024-04-17 06:51:08.796752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.203 [2024-04-17 06:51:08.796767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.203 [2024-04-17 06:51:08.796780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.203 [2024-04-17 06:51:08.796795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.203 [2024-04-17 06:51:08.796808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.203 [2024-04-17 06:51:08.796824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.203 [2024-04-17 06:51:08.796837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.203 [2024-04-17 06:51:08.796852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.203 [2024-04-17 06:51:08.796865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.203 [2024-04-17 06:51:08.796885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.203 [2024-04-17 06:51:08.796899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.203 [2024-04-17 06:51:08.796914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.203 [2024-04-17 06:51:08.796928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.203 [2024-04-17 06:51:08.796943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.203 [2024-04-17 06:51:08.796956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.203 [2024-04-17 06:51:08.796971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.203 [2024-04-17 06:51:08.796984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.203 [2024-04-17 06:51:08.796999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.203 [2024-04-17 06:51:08.797012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.203 [2024-04-17 06:51:08.797027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.203 [2024-04-17 06:51:08.797040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.203 [2024-04-17 06:51:08.797055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.203 [2024-04-17 06:51:08.797068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.203 [2024-04-17 06:51:08.797083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.203 [2024-04-17 06:51:08.797096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.797111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.797128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.797143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.797156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.797170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.797282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.797302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.797316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.797332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.797351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.797382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.797403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.797419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.797435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.797450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.797464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.797478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.797508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.797537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.797554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:04.204 [2024-04-17 06:51:08.797575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.797597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.797625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.797649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.797664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.797677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.797692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.797705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.797720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.797733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.797749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.797762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.797777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.797790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.797809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.797823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.797838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.797851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.797865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.797878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 
[2024-04-17 06:51:08.797893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.797906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.797921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.797933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.797949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.797962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.797977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.797990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.798005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.798018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.798034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.798047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.798061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.798074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.798088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.798102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.798117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.798130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.798144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.798161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 
06:51:08.798206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.798226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.798241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.798255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.798269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.798282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.798297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.798310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.798325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.798337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.798352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.798365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.798379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.798392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.798435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.204 [2024-04-17 06:51:08.798524] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xac23a0 was disconnected and freed. reset controller. 
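The burst of ABORTED - SQ DELETION completions above ends with spdk_nvme_qpair_process_completions() reporting CQ transport error -6 (ENXIO) and bdev_nvme freeing the dead qpair before resetting the controller. As a rough illustration of that poll-and-recover pattern, here is a minimal sketch that uses only the public SPDK NVMe host API; the helper name poll_or_recover() is invented for this note and is not part of the test code.

/* Sketch only: assumes an initialized SPDK environment and an existing
 * controller handle. spdk_nvme_qpair_process_completions() returns the
 * number of completions reaped, or a negative errno such as -ENXIO once
 * the transport connection is gone; at that point every queued command
 * completes with ABORTED - SQ DELETION, as in the log above. */
#include "spdk/nvme.h"

static struct spdk_nvme_qpair *
poll_or_recover(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
{
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);

	if (rc >= 0) {
		return qpair;            /* rc completions were reaped */
	}

	/* Transport-level failure: drop the broken qpair, reset the
	 * controller, then allocate a fresh I/O qpair so the caller can
	 * resubmit whatever was aborted. */
	spdk_nvme_ctrlr_free_io_qpair(qpair);
	if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
		return NULL;             /* controller did not recover */
	}
	return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
}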
00:25:04.204 [2024-04-17 06:51:08.798779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.204 [2024-04-17 06:51:08.798802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.204 [2024-04-17 06:51:08.798822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.205 [2024-04-17 06:51:08.798837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.205 [2024-04-17 06:51:08.798852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.205 [2024-04-17 06:51:08.798865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.205 [2024-04-17 06:51:08.798881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.205 [2024-04-17 06:51:08.798894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.205 [2024-04-17 06:51:08.798915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.205 [2024-04-17 06:51:08.798929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.205 [2024-04-17 06:51:08.798944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.205 [2024-04-17 06:51:08.798957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.205 [2024-04-17 06:51:08.798972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.205 [2024-04-17 06:51:08.798985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.205 [2024-04-17 06:51:08.799000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.205 [2024-04-17 06:51:08.799013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.205 [2024-04-17 06:51:08.799027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.205 [2024-04-17 06:51:08.799043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.205 [2024-04-17 06:51:08.799073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.205 [2024-04-17 06:51:08.799094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.205 
[2024-04-17 06:51:08.799111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.205 [2024-04-17 06:51:08.799124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.205 [2024-04-17 06:51:08.799139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.205 [2024-04-17 06:51:08.799153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.205 [2024-04-17 06:51:08.799187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.205 [2024-04-17 06:51:08.799214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.205 [2024-04-17 06:51:08.799243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.205 [2024-04-17 06:51:08.799271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.205 [2024-04-17 06:51:08.799292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.205 [2024-04-17 06:51:08.799306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.205 [2024-04-17 06:51:08.799326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.205 [2024-04-17 06:51:08.799353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.205 [2024-04-17 06:51:08.799375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.205 [2024-04-17 06:51:08.799393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.205 [2024-04-17 06:51:08.799409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.205 [2024-04-17 06:51:08.799423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.205 [2024-04-17 06:51:08.799437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.205 [2024-04-17 06:51:08.799450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.205 [2024-04-17 06:51:08.799465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.205 [2024-04-17 06:51:08.799478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.205 [2024-04-17 
06:51:08.799494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.205 [2024-04-17 06:51:08.799507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.205 [2024-04-17 06:51:08.799522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.205 [2024-04-17 06:51:08.799535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.205 [2024-04-17 06:51:08.799550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.205 [2024-04-17 06:51:08.799563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.205 [2024-04-17 06:51:08.799578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.205 [2024-04-17 06:51:08.799591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.205 [2024-04-17 06:51:08.799606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.799619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.799635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.799648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.799663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.799676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.799691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.799704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.799720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.799733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.799752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.799766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 
06:51:08.799781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.799794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.799809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.799822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.799837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.799850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.799865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.799878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.799893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.799906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.799920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.799933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.799948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.799961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.799975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.799988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.800003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.800015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.800030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.800042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.800057] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.800069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.800085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.800101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.800116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.800129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.800143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.800156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.800188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.800205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.800221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.800234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.800249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.800263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.800278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.800301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.800334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.800362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.800390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.800405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.800421] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.800434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.800449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.800463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.800478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.800495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.800513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.800541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.800565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.800587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.800603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.800617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.800632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.800646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.800661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.800674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.800689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.800702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.800717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.800731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.800746] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.800759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.800774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.800788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-04-17 06:51:08.800803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-04-17 06:51:08.800817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.800832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.800845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.800885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.476 [2024-04-17 06:51:08.800957] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xac3890 was disconnected and freed. reset controller. 00:25:04.476 [2024-04-17 06:51:08.804291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.804329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.804356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.804371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.804393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.804407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.804422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.804436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.804451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.804464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.804479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.804494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.804509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.804522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.804537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.804559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.804574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.804587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.804602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.804615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.804630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.804644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.804659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.804672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.804688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.804701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.804717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.804730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.804746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.804764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.804779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.804793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.804808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.804821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.804836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.804849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.804864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.804878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.804893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.804906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.804921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.804934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.804949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.804963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.804977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.804991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.805006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.805019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.805034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.805047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.805063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 
lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.805076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.805092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.805105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.805124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.805137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.805152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.805166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.805192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.805208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.805224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.805237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.805252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.805265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.805279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.805293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.805308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.805321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.805336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.805350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.805365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.805378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.805393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-04-17 06:51:08.805406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-04-17 06:51:08.805421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-04-17 06:51:08.805434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-04-17 06:51:08.805449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-04-17 06:51:08.805462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-04-17 06:51:08.805477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-04-17 06:51:08.805500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-04-17 06:51:08.805516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-04-17 06:51:08.805529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-04-17 06:51:08.805544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-04-17 06:51:08.805557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-04-17 06:51:08.805572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-04-17 06:51:08.805585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-04-17 06:51:08.805600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-04-17 06:51:08.805613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-04-17 06:51:08.805629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-04-17 06:51:08.805642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-04-17 06:51:08.805658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-04-17 06:51:08.805671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-04-17 06:51:08.805686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-04-17 06:51:08.805699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-04-17 06:51:08.805714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-04-17 06:51:08.805727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-04-17 06:51:08.805741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-04-17 06:51:08.805754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-04-17 06:51:08.805770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-04-17 06:51:08.805783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-04-17 06:51:08.805798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-04-17 06:51:08.805811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-04-17 06:51:08.805826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-04-17 06:51:08.805839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-04-17 06:51:08.805857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-04-17 06:51:08.805871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-04-17 06:51:08.805886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-04-17 06:51:08.805899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-04-17 06:51:08.805914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-04-17 06:51:08.805927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-04-17 06:51:08.805942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:04.477 [2024-04-17 06:51:08.805955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-04-17 06:51:08.805970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-04-17 06:51:08.805983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-04-17 06:51:08.805999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-04-17 06:51:08.806012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-04-17 06:51:08.806027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-04-17 06:51:08.806040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-04-17 06:51:08.806055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-04-17 06:51:08.806068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-04-17 06:51:08.806083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-04-17 06:51:08.806097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-04-17 06:51:08.806112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-04-17 06:51:08.806125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-04-17 06:51:08.806140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-04-17 06:51:08.806153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-04-17 06:51:08.806181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-04-17 06:51:08.806197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-04-17 06:51:08.806305] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb71990 was disconnected and freed. reset controller. 
00:25:04.477 [2024-04-17 06:51:08.806591] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 
00:25:04.477 [2024-04-17 06:51:08.806621] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 
00:25:04.477 [2024-04-17 06:51:08.806649] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa670e0 (9): Bad file descriptor 
00:25:04.477 [2024-04-17 06:51:08.806671] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa3acd0 (9): Bad file descriptor 
00:25:04.477 [2024-04-17 06:51:08.806693] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa37f60 (9): Bad file descriptor 
00:25:04.477 [2024-04-17 06:51:08.806723] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc02840 (9): Bad file descriptor 
00:25:04.477 [2024-04-17 06:51:08.806754] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb79900 (9): Bad file descriptor 
00:25:04.477 [2024-04-17 06:51:08.806778] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf6be0 (9): Bad file descriptor 
00:25:04.477 [2024-04-17 06:51:08.806808] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5b840 (9): Bad file descriptor 
00:25:04.477 [2024-04-17 06:51:08.806839] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa67490 (9): Bad file descriptor 
00:25:04.477 [2024-04-17 06:51:08.806868] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa7c850 (9): Bad file descriptor 
00:25:04.477 [2024-04-17 06:51:08.806896] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae3fa0 (9): Bad file descriptor 
00:25:04.477 [2024-04-17 06:51:08.810774] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa31870 was disconnected and freed. reset controller. 
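The resetting-controller notices above kick off a reconnect to the target at 10.0.0.2:4420 over TCP; the connect() failures with errno = 111 (ECONNREFUSED) in the records that follow mean nothing is accepting on that address at that instant, and the reset logic normally just retries. A minimal sketch of what such a reconnect amounts to, assuming only the public SPDK host API (the helper name reconnect_cnode3() is invented here; the address, port and NQN are the ones visible in this log), is:

#include <stdio.h>
#include "spdk/nvme.h"

/* Sketch only: assumes the SPDK environment is already initialized.
 * Builds the NVMe/TCP transport ID seen in the log and connects to it;
 * spdk_nvme_connect() returns NULL if the TCP connect (or the later
 * fabrics CONNECT) fails, e.g. while the target listener is down and
 * connect() reports errno 111 (ECONNREFUSED). */
static struct spdk_nvme_ctrlr *
reconnect_cnode3(void)
{
	struct spdk_nvme_transport_id trid = {0};

	spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
	trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
	snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");
	snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
	snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode3");

	return spdk_nvme_connect(&trid, NULL, 0);
}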
00:25:04.477 [2024-04-17 06:51:08.810863] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:25:04.477 [2024-04-17 06:51:08.810940] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:25:04.477 [2024-04-17 06:51:08.811003] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:25:04.477 [2024-04-17 06:51:08.811032] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 
00:25:04.477 [2024-04-17 06:51:08.811230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:25:04.477 [2024-04-17 06:51:08.811385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:25:04.477 [2024-04-17 06:51:08.811411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3acd0 with addr=10.0.0.2, port=4420 
00:25:04.477 [2024-04-17 06:51:08.811427] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa3acd0 is same with the state(5) to be set 
00:25:04.477 [2024-04-17 06:51:08.811670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:25:04.477 [2024-04-17 06:51:08.811796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:25:04.477 [2024-04-17 06:51:08.811820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa670e0 with addr=10.0.0.2, port=4420 
00:25:04.477 [2024-04-17 06:51:08.811835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa670e0 is same with the state(5) to be set 
00:25:04.477 [2024-04-17 06:51:08.811913] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:25:04.477 [2024-04-17 06:51:08.811982] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:25:04.477 [2024-04-17 06:51:08.812075] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:25:04.478 [2024-04-17 06:51:08.812678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:04.478 [2024-04-17 06:51:08.812704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:04.478 [2024-04-17 06:51:08.812738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:04.478 [2024-04-17 06:51:08.812754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:04.478 [2024-04-17 06:51:08.812770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:04.478 [2024-04-17 06:51:08.812784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:04.478 [2024-04-17 06:51:08.812799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:04.478 [2024-04-17 06:51:08.812812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:04.478 [2024-04-17 06:51:08.812826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:04.478 [2024-04-17 06:51:08.812839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.812854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 06:51:08.812867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.812882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 06:51:08.812895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.812910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 06:51:08.812923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.812938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 06:51:08.812952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.812966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 06:51:08.812979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.812994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 06:51:08.813007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.813022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 06:51:08.813035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.813050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 06:51:08.813063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.813078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 06:51:08.813091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.813110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 
06:51:08.813124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.813139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 06:51:08.813153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.813192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 06:51:08.813209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.813224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 06:51:08.813237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.813252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 06:51:08.813265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.813280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 06:51:08.813293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.813309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 06:51:08.813322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.813337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 06:51:08.813350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.813365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 06:51:08.813378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.813392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 06:51:08.813406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.813421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 06:51:08.813434] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.813449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 06:51:08.813462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.813486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 06:51:08.813503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.813519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 06:51:08.813532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.813550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 06:51:08.813564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.813579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 06:51:08.813592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.813607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 06:51:08.813621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.813636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 06:51:08.813649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.813665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 06:51:08.813678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.813693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 06:51:08.813706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.813721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 06:51:08.813734] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.813749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 06:51:08.813762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.813777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 06:51:08.813790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.813805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 06:51:08.813818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.813834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-04-17 06:51:08.813847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-04-17 06:51:08.813865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-04-17 06:51:08.813879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-04-17 06:51:08.813895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-04-17 06:51:08.813908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-04-17 06:51:08.813923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-04-17 06:51:08.813936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-04-17 06:51:08.813951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-04-17 06:51:08.813964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-04-17 06:51:08.813979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-04-17 06:51:08.813992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-04-17 06:51:08.814010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-04-17 06:51:08.814036] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-04-17 06:51:08.814065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-04-17 06:51:08.814090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-04-17 06:51:08.814118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-04-17 06:51:08.814133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-04-17 06:51:08.814148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-04-17 06:51:08.814172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-04-17 06:51:08.814197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-04-17 06:51:08.814211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-04-17 06:51:08.814226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-04-17 06:51:08.814239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-04-17 06:51:08.814255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-04-17 06:51:08.814268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-04-17 06:51:08.814283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-04-17 06:51:08.814301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-04-17 06:51:08.814317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-04-17 06:51:08.814330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-04-17 06:51:08.814345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-04-17 06:51:08.814358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-04-17 06:51:08.814373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-04-17 06:51:08.814386] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-04-17 06:51:08.814401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-04-17 06:51:08.814414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-04-17 06:51:08.814429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-04-17 06:51:08.814442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-04-17 06:51:08.814457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-04-17 06:51:08.814472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-04-17 06:51:08.814486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-04-17 06:51:08.814500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-04-17 06:51:08.814514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-04-17 06:51:08.814537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-04-17 06:51:08.814552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-04-17 06:51:08.814565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-04-17 06:51:08.814580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-04-17 06:51:08.814594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-04-17 06:51:08.814609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-04-17 06:51:08.814622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-04-17 06:51:08.814637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-04-17 06:51:08.814650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-04-17 06:51:08.814668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb72b00 is same with the state(5) to be set 00:25:04.479 [2024-04-17 06:51:08.815359] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: 
*NOTICE*: qpair 0xb72b00 was disconnected and freed. reset controller. 00:25:04.479 [2024-04-17 06:51:08.815391] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:25:04.479 [2024-04-17 06:51:08.815563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.479 [2024-04-17 06:51:08.815691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.479 [2024-04-17 06:51:08.815715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb79900 with addr=10.0.0.2, port=4420 00:25:04.479 [2024-04-17 06:51:08.815730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb79900 is same with the state(5) to be set 00:25:04.479 [2024-04-17 06:51:08.815753] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa3acd0 (9): Bad file descriptor 00:25:04.479 [2024-04-17 06:51:08.815772] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa670e0 (9): Bad file descriptor 00:25:04.479 [2024-04-17 06:51:08.817050] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:25:04.479 [2024-04-17 06:51:08.817210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.479 [2024-04-17 06:51:08.817337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.479 [2024-04-17 06:51:08.817362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa7c850 with addr=10.0.0.2, port=4420 00:25:04.479 [2024-04-17 06:51:08.817378] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7c850 is same with the state(5) to be set 00:25:04.479 [2024-04-17 06:51:08.817396] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb79900 (9): Bad file descriptor 00:25:04.479 [2024-04-17 06:51:08.817414] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:04.480 [2024-04-17 06:51:08.817426] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:04.480 [2024-04-17 06:51:08.817441] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:25:04.480 [2024-04-17 06:51:08.817461] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:04.480 [2024-04-17 06:51:08.817477] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:04.480 [2024-04-17 06:51:08.817489] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:25:04.480 [2024-04-17 06:51:08.817614] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:04.480 [2024-04-17 06:51:08.817636] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:04.480 [2024-04-17 06:51:08.817783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.480 [2024-04-17 06:51:08.817933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.480 [2024-04-17 06:51:08.817958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5b840 with addr=10.0.0.2, port=4420 00:25:04.480 [2024-04-17 06:51:08.817974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5b840 is same with the state(5) to be set 00:25:04.480 [2024-04-17 06:51:08.817992] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa7c850 (9): Bad file descriptor 00:25:04.480 [2024-04-17 06:51:08.818008] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:25:04.480 [2024-04-17 06:51:08.818020] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:25:04.480 [2024-04-17 06:51:08.818038] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:25:04.480 [2024-04-17 06:51:08.818098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.480 [2024-04-17 06:51:08.818119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.480 [2024-04-17 06:51:08.818139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.480 [2024-04-17 06:51:08.818153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.480 [2024-04-17 06:51:08.818172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.480 [2024-04-17 06:51:08.818194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.480 [2024-04-17 06:51:08.818210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.480 [2024-04-17 06:51:08.818223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.480 [2024-04-17 06:51:08.818238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.480 [2024-04-17 06:51:08.818252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.480 [2024-04-17 06:51:08.818267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.480 [2024-04-17 06:51:08.818280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.480 [2024-04-17 06:51:08.818295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.480 [2024-04-17 06:51:08.818308] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.480 [2024-04-17 06:51:08.818323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.480 [2024-04-17 06:51:08.818336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.480 [2024-04-17 06:51:08.818351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.480 [2024-04-17 06:51:08.818364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.480 [2024-04-17 06:51:08.818379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.480 [2024-04-17 06:51:08.818392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.480 [2024-04-17 06:51:08.818406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.480 [2024-04-17 06:51:08.818419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.480 [2024-04-17 06:51:08.818434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.480 [2024-04-17 06:51:08.818447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.480 [2024-04-17 06:51:08.818472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.480 [2024-04-17 06:51:08.818490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.480 [2024-04-17 06:51:08.818506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.480 [2024-04-17 06:51:08.818519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.480 [2024-04-17 06:51:08.818537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.480 [2024-04-17 06:51:08.818550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.480 [2024-04-17 06:51:08.818565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.480 [2024-04-17 06:51:08.818578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.480 [2024-04-17 06:51:08.818593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.480 [2024-04-17 06:51:08.818606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.480 [2024-04-17 06:51:08.818620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.480 [2024-04-17 06:51:08.818633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.480 [2024-04-17 06:51:08.818648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.480 [2024-04-17 06:51:08.818661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.480 [2024-04-17 06:51:08.818675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.480 [2024-04-17 06:51:08.818688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.480 [2024-04-17 06:51:08.818703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.480 [2024-04-17 06:51:08.818717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.480 [2024-04-17 06:51:08.818731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.480 [2024-04-17 06:51:08.818744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.480 [2024-04-17 06:51:08.818759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.480 [2024-04-17 06:51:08.818772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.480 [2024-04-17 06:51:08.818787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.480 [2024-04-17 06:51:08.818800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.480 [2024-04-17 06:51:08.818814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.480 [2024-04-17 06:51:08.818827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.480 [2024-04-17 06:51:08.818846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.480 [2024-04-17 06:51:08.818860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.480 [2024-04-17 06:51:08.818874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.480 [2024-04-17 06:51:08.818887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.480 [2024-04-17 06:51:08.818902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.480 [2024-04-17 06:51:08.818915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.480 [2024-04-17 06:51:08.818931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.480 [2024-04-17 06:51:08.818944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.480 [2024-04-17 06:51:08.818958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.480 [2024-04-17 06:51:08.818971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.480 [2024-04-17 06:51:08.818988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.480 [2024-04-17 06:51:08.819002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.480 [2024-04-17 06:51:08.819017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.480 [2024-04-17 06:51:08.819031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.480 [2024-04-17 06:51:08.819046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.480 [2024-04-17 06:51:08.819059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.819074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.819087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.819102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.819115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.819130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.819143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.819158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.819172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.819194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.819212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.819228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.819241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.819256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.819270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.819285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.819298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.819313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.819327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.819342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.819355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.819370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.819383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.819398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.819411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.819426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.819439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.819454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.819468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:04.481 [2024-04-17 06:51:08.819487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.819500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.819515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.819529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.819545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.819558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.819576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.819590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.819605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.819619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.819633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.819647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.819662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.819674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.819690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.819703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.819718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.819731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.819746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.819759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 
06:51:08.819774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.819787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.819802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.819815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.819830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.819843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.819859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.819871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.819886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.819899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.819915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.819933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.819949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.819963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.819977] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabfd10 is same with the state(5) to be set 00:25:04.481 [2024-04-17 06:51:08.821248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.821270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.821290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.821305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.821320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.821333] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.821348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.821362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.821377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.821390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.821405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.821419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.821434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.821448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.481 [2024-04-17 06:51:08.821463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.481 [2024-04-17 06:51:08.821476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.821491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.821504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.821519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.821540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.821555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.821573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.821588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.821602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.821617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.821630] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.821646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.821660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.821675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.821689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.821705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.821718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.821733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.821746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.821761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.821775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.821789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.821803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.821817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.821830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.821846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.821859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.821873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.821887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.821902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.821915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.821934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.821947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.821962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.821976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.821990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.822004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.822018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.822031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.822046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.822059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.822074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.822087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.822102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.822115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.822131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.822144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.822169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.822192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.822208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.822223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.822238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.822251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.822267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.822280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.822295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.822313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.822328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.822342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.822357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.822371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.822385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.822399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.822414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.822427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.822441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.822455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.822476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.822489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.822504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.822517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.822532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.822545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.822560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.822573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.822589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.822602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.822618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.822631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.482 [2024-04-17 06:51:08.822646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.482 [2024-04-17 06:51:08.822660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.822678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.822692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.822707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.822720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.822735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.822749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.822763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.822777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.822791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.822804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:04.483 [2024-04-17 06:51:08.822819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.822832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.822847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.822860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.822876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.822889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.822905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.822918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.822932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.822945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.822960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.822974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.822989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.823003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.823017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.823034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.823051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.823064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.823079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.823093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 
06:51:08.823108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.823121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.823135] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac0f80 is same with the state(5) to be set 00:25:04.483 [2024-04-17 06:51:08.824386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.824409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.824428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.824443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.824470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.824483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.824498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.824512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.824534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.824547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.824562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.824575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.824590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.824604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.824619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.824632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.824647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.824665] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.824681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.824694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.824709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.824722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.824737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.824751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.824766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.824779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.824794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.824808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.824823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.824836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.824851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.824864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.824879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.824892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.824907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.824920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.824935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.824947] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.824962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.824976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.824991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.483 [2024-04-17 06:51:08.825004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.483 [2024-04-17 06:51:08.825022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.825036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.825051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.825064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.825079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.825092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.825107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.825120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.825135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.825148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.825169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.825189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.825206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.825219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.825235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.825248] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.825263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.825276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.825291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.825305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.825320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.825333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.825348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.825361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.825377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.825393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.825409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.825422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.825438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.825451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.825472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.825485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.825500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.825513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.825528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.825541] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.825556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.825569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.825584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.825598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.825613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.825626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.825641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.825654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.825669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.825682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.825697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.825710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.825726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.825740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.825755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.825776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.825792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.825805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.825820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.825833] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.825848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.825861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.825876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.825890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.825905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.825918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.825933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.825946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.825960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.825974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.825989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.826001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.484 [2024-04-17 06:51:08.826016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.484 [2024-04-17 06:51:08.826029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.826044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.826057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.826072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.826085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.826100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.826113] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.826131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.826145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.826169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.826188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.826204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.826218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.826232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.826246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.826260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.826273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.826287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffab0 is same with the state(5) to be set 00:25:04.485 [2024-04-17 06:51:08.827539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.827561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.827581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.827595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.827610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.827623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.827638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.827652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.827667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.827680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.827695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.827708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.827723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.827737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.827757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.827771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.827786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.827799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.827814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.827827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.827842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.827855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.827870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.827884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.827899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.827913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.827927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.827940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.827955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.827968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.827983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.827996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.828011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.828024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.828039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.828052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.828067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.828080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.828095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.828112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.828128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.828141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.828156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.828172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.828196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.828209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.828224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.828237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.828252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.828265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.828281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.828294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.828309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.828322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.828336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.828350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.828365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.828378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.828393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.828406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.485 [2024-04-17 06:51:08.828421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.485 [2024-04-17 06:51:08.828434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.828449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.828462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.828491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.828505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.828519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.828532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.828548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.828561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.828576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.828589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.828604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.828617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.828632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.828645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.828660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.828673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.828688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.828701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.828716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.828729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.828744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.828758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.828773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.828786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.828802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.828815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.828830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:04.486 [2024-04-17 06:51:08.828846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.828862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.828875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.828890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.828903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.828918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.828931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.828946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.828959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.828974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.828987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.829002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.829015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.829030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.829043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.829058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.829071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.829086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.829099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.829113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 
06:51:08.829126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.829141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.829154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.829173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.829196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.829218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.829232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.829247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.829260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.829275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.829288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.829303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.829316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.829332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.829345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.829360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.829373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.829388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.829401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.829414] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa32a10 is same with the state(5) to be set 00:25:04.486 [2024-04-17 06:51:08.830660] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.830682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.830702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.830717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.830732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.830745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.830760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.830773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.830788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.830801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.486 [2024-04-17 06:51:08.830821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.486 [2024-04-17 06:51:08.830835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.830850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.830863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.830879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.830891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.830906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.830920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.830935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.830948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.830963] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.830976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.830991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.487 [2024-04-17 06:51:08.831971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.487 [2024-04-17 06:51:08.831986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.488 [2024-04-17 06:51:08.831999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.488 [2024-04-17 06:51:08.832014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.488 [2024-04-17 06:51:08.832027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.488 [2024-04-17 06:51:08.832042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.488 [2024-04-17 06:51:08.832055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.488 [2024-04-17 06:51:08.832070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.488 [2024-04-17 06:51:08.832083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.488 [2024-04-17 06:51:08.832098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.488 [2024-04-17 06:51:08.832111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.488 [2024-04-17 06:51:08.832126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:04.488 [2024-04-17 06:51:08.832139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.488 [2024-04-17 06:51:08.832155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.488 [2024-04-17 06:51:08.832171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.488 [2024-04-17 06:51:08.832200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.488 [2024-04-17 06:51:08.832215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.488 [2024-04-17 06:51:08.832230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.488 [2024-04-17 06:51:08.832243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.488 [2024-04-17 06:51:08.832257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.488 [2024-04-17 06:51:08.832270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.488 [2024-04-17 06:51:08.832290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.488 [2024-04-17 06:51:08.832304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.488 [2024-04-17 06:51:08.832319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.488 [2024-04-17 06:51:08.832332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.488 [2024-04-17 06:51:08.832347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.488 [2024-04-17 06:51:08.832360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.488 [2024-04-17 06:51:08.832375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.488 [2024-04-17 06:51:08.832389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.488 [2024-04-17 06:51:08.832403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.488 [2024-04-17 06:51:08.832416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.488 [2024-04-17 06:51:08.832431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.488 [2024-04-17 
06:51:08.832444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.488 [2024-04-17 06:51:08.832459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.488 [2024-04-17 06:51:08.832472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.488 [2024-04-17 06:51:08.832487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.488 [2024-04-17 06:51:08.832500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.488 [2024-04-17 06:51:08.832515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.488 [2024-04-17 06:51:08.832537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.488 [2024-04-17 06:51:08.832551] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa33d90 is same with the state(5) to be set 00:25:04.488 [2024-04-17 06:51:08.834902] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:04.488 [2024-04-17 06:51:08.834933] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:04.488 [2024-04-17 06:51:08.834959] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:25:04.488 [2024-04-17 06:51:08.834976] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:25:04.488 [2024-04-17 06:51:08.835035] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5b840 (9): Bad file descriptor 00:25:04.488 [2024-04-17 06:51:08.835058] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:04.488 [2024-04-17 06:51:08.835072] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:04.488 [2024-04-17 06:51:08.835088] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:04.488 [2024-04-17 06:51:08.835186] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:04.488 [2024-04-17 06:51:08.835217] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:04.488 [2024-04-17 06:51:08.835238] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:04.488 [2024-04-17 06:51:08.835256] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
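Every completion printed in the block above carries the same status, ABORTED - SQ DELETION (sct 00 / sc 08): as each controller is reset, its submission queues are deleted and every outstanding READ is failed back, one command/completion NOTICE pair at a time. When triaging a run like this it is usually enough to tally the aborts per queue rather than read them line by line; a minimal shell sketch (the log file name here is hypothetical) would be:

  # Count "ABORTED - SQ DELETION" completions per qid in a captured log.
  grep -o 'ABORTED - SQ DELETION ([0-9/]*) qid:[0-9]*' nvmf_shutdown_tc3.log \
    | awk '{print $NF}' | sort | uniq -c | sort -rn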
00:25:04.488 [2024-04-17 06:51:08.835343] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:25:04.488 task offset: 31232 on job bdev=Nvme3n1 fails 00:25:04.488 00:25:04.488 Latency(us) 00:25:04.488 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.488 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:04.488 Job: Nvme1n1 ended in about 0.92 seconds with error 00:25:04.488 Verification LBA range: start 0x0 length 0x400 00:25:04.488 Nvme1n1 : 0.92 139.68 8.73 69.84 0.00 302064.32 19223.89 259425.47 00:25:04.488 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:04.488 Job: Nvme2n1 ended in about 0.92 seconds with error 00:25:04.488 Verification LBA range: start 0x0 length 0x400 00:25:04.488 Nvme2n1 : 0.92 139.20 8.70 69.60 0.00 296858.86 28156.21 259425.47 00:25:04.488 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:04.488 Job: Nvme3n1 ended in about 0.90 seconds with error 00:25:04.488 Verification LBA range: start 0x0 length 0x400 00:25:04.488 Nvme3n1 : 0.90 213.71 13.36 71.24 0.00 212637.77 6068.15 267192.70 00:25:04.488 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:04.488 Job: Nvme4n1 ended in about 0.90 seconds with error 00:25:04.488 Verification LBA range: start 0x0 length 0x400 00:25:04.488 Nvme4n1 : 0.90 213.45 13.34 71.15 0.00 208322.42 4660.34 257872.02 00:25:04.488 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:04.488 Job: Nvme5n1 ended in about 0.92 seconds with error 00:25:04.488 Verification LBA range: start 0x0 length 0x400 00:25:04.488 Nvme5n1 : 0.92 138.73 8.67 69.36 0.00 279394.99 19418.07 264085.81 00:25:04.488 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:04.488 Job: Nvme6n1 ended in about 0.90 seconds with error 00:25:04.488 Verification LBA range: start 0x0 length 0x400 00:25:04.488 Nvme6n1 : 0.90 212.49 13.28 70.83 0.00 200108.18 14175.19 254765.13 00:25:04.488 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:04.488 Verification LBA range: start 0x0 length 0x400 00:25:04.489 Nvme7n1 : 0.90 212.15 13.26 0.00 0.00 261227.71 25826.04 279620.27 00:25:04.489 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:04.489 Job: Nvme8n1 ended in about 0.93 seconds with error 00:25:04.489 Verification LBA range: start 0x0 length 0x400 00:25:04.489 Nvme8n1 : 0.93 138.26 8.64 69.13 0.00 262173.84 19320.98 243891.01 00:25:04.489 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:04.489 Job: Nvme9n1 ended in about 0.93 seconds with error 00:25:04.489 Verification LBA range: start 0x0 length 0x400 00:25:04.489 Nvme9n1 : 0.93 137.80 8.61 68.90 0.00 257456.86 21068.61 270299.59 00:25:04.489 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:04.489 Job: Nvme10n1 ended in about 0.91 seconds with error 00:25:04.489 Verification LBA range: start 0x0 length 0x400 00:25:04.489 Nvme10n1 : 0.91 140.30 8.77 70.15 0.00 246146.84 19709.35 295154.73 00:25:04.489 =================================================================================================================== 00:25:04.489 Total : 1685.78 105.36 630.20 0.00 248492.24 4660.34 295154.73 00:25:04.489 [2024-04-17 06:51:08.862915] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:04.489 [2024-04-17 06:51:08.862994] 
nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:25:04.489 [2024-04-17 06:51:08.863025] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:04.489 [2024-04-17 06:51:08.863348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.489 [2024-04-17 06:51:08.863500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.489 [2024-04-17 06:51:08.863526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa37f60 with addr=10.0.0.2, port=4420 00:25:04.489 [2024-04-17 06:51:08.863545] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa37f60 is same with the state(5) to be set 00:25:04.489 [2024-04-17 06:51:08.863670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.489 [2024-04-17 06:51:08.863800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.489 [2024-04-17 06:51:08.863825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc02840 with addr=10.0.0.2, port=4420 00:25:04.489 [2024-04-17 06:51:08.863841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02840 is same with the state(5) to be set 00:25:04.489 [2024-04-17 06:51:08.863964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.489 [2024-04-17 06:51:08.864097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.489 [2024-04-17 06:51:08.864122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa67490 with addr=10.0.0.2, port=4420 00:25:04.489 [2024-04-17 06:51:08.864138] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67490 is same with the state(5) to be set 00:25:04.489 [2024-04-17 06:51:08.864165] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:04.489 [2024-04-17 06:51:08.864185] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:04.489 [2024-04-17 06:51:08.864201] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:04.489 [2024-04-17 06:51:08.865578] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:25:04.489 [2024-04-17 06:51:08.865607] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:25:04.489 [2024-04-17 06:51:08.865625] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:25:04.489 [2024-04-17 06:51:08.865641] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:04.489 [2024-04-17 06:51:08.865820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.489 [2024-04-17 06:51:08.865958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.489 [2024-04-17 06:51:08.865982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf6be0 with addr=10.0.0.2, port=4420 00:25:04.489 [2024-04-17 06:51:08.865997] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf6be0 is same with the state(5) to be set 00:25:04.489 [2024-04-17 06:51:08.866110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.489 [2024-04-17 06:51:08.866238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.489 [2024-04-17 06:51:08.866265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae3fa0 with addr=10.0.0.2, port=4420 00:25:04.489 [2024-04-17 06:51:08.866280] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae3fa0 is same with the state(5) to be set 00:25:04.489 [2024-04-17 06:51:08.866304] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa37f60 (9): Bad file descriptor 00:25:04.489 [2024-04-17 06:51:08.866324] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc02840 (9): Bad file descriptor 00:25:04.489 [2024-04-17 06:51:08.866350] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa67490 (9): Bad file descriptor 00:25:04.489 [2024-04-17 06:51:08.866416] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:04.489 [2024-04-17 06:51:08.866443] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:04.489 [2024-04-17 06:51:08.866469] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:25:04.489 [2024-04-17 06:51:08.866979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.489 [2024-04-17 06:51:08.867117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.489 [2024-04-17 06:51:08.867142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa670e0 with addr=10.0.0.2, port=4420 00:25:04.489 [2024-04-17 06:51:08.867157] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa670e0 is same with the state(5) to be set 00:25:04.489 [2024-04-17 06:51:08.867308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.489 [2024-04-17 06:51:08.867426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.489 [2024-04-17 06:51:08.867451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3acd0 with addr=10.0.0.2, port=4420 00:25:04.489 [2024-04-17 06:51:08.867466] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa3acd0 is same with the state(5) to be set 00:25:04.489 [2024-04-17 06:51:08.867594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.489 [2024-04-17 06:51:08.867716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.489 [2024-04-17 06:51:08.867740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb79900 with addr=10.0.0.2, port=4420 00:25:04.489 [2024-04-17 06:51:08.867755] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb79900 is same with the state(5) to be set 00:25:04.489 [2024-04-17 06:51:08.867773] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf6be0 (9): Bad file descriptor 00:25:04.489 [2024-04-17 06:51:08.867790] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae3fa0 (9): Bad file descriptor 00:25:04.489 [2024-04-17 06:51:08.867805] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:04.489 [2024-04-17 06:51:08.867817] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:04.489 [2024-04-17 06:51:08.867829] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:04.489 [2024-04-17 06:51:08.867847] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:04.489 [2024-04-17 06:51:08.867861] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:25:04.489 [2024-04-17 06:51:08.867873] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:25:04.489 [2024-04-17 06:51:08.867888] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:04.489 [2024-04-17 06:51:08.867901] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:04.489 [2024-04-17 06:51:08.867913] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
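The connect() failures above all report errno 111 (ECONNREFUSED): the target side has already been torn down by this test, so every reconnect attempt is expected to be refused and each retry ends in "Resetting controller failed." If the same pattern shows up outside of a shutdown test, a quick first check is whether anything is still listening on the NVMe/TCP port seen in the trace (10.0.0.2 port 4420 here), for example:

  # Is anything still listening on the NVMe/TCP port? Run this inside the
  # target's network namespace if the harness created one (as nvmf_tcp_init does).
  ss -ltn '( sport = :4420 )'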
00:25:04.489 [2024-04-17 06:51:08.867995] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:25:04.489 [2024-04-17 06:51:08.868018] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:25:04.489 [2024-04-17 06:51:08.868034] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:04.489 [2024-04-17 06:51:08.868045] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:04.489 [2024-04-17 06:51:08.868062] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:04.489 [2024-04-17 06:51:08.868092] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa670e0 (9): Bad file descriptor 00:25:04.489 [2024-04-17 06:51:08.868112] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa3acd0 (9): Bad file descriptor 00:25:04.489 [2024-04-17 06:51:08.868128] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb79900 (9): Bad file descriptor 00:25:04.489 [2024-04-17 06:51:08.868143] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:25:04.489 [2024-04-17 06:51:08.868155] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:25:04.489 [2024-04-17 06:51:08.868167] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:25:04.489 [2024-04-17 06:51:08.868192] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:25:04.489 [2024-04-17 06:51:08.868207] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:25:04.489 [2024-04-17 06:51:08.868218] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:25:04.489 [2024-04-17 06:51:08.868255] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:04.489 [2024-04-17 06:51:08.868271] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:04.489 [2024-04-17 06:51:08.868386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.489 [2024-04-17 06:51:08.868525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.489 [2024-04-17 06:51:08.868549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa7c850 with addr=10.0.0.2, port=4420 00:25:04.489 [2024-04-17 06:51:08.868564] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7c850 is same with the state(5) to be set 00:25:04.489 [2024-04-17 06:51:08.868679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.489 [2024-04-17 06:51:08.868799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.489 [2024-04-17 06:51:08.868823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5b840 with addr=10.0.0.2, port=4420 00:25:04.489 [2024-04-17 06:51:08.868838] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5b840 is same with the state(5) to be set 00:25:04.490 [2024-04-17 06:51:08.868852] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:04.490 [2024-04-17 06:51:08.868864] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:04.490 [2024-04-17 06:51:08.868877] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:25:04.490 [2024-04-17 06:51:08.869166] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:04.490 [2024-04-17 06:51:08.869194] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:04.490 [2024-04-17 06:51:08.869207] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:25:04.490 [2024-04-17 06:51:08.869225] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:25:04.490 [2024-04-17 06:51:08.869239] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:25:04.490 [2024-04-17 06:51:08.869252] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:25:04.490 [2024-04-17 06:51:08.869291] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:04.490 [2024-04-17 06:51:08.869309] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:04.490 [2024-04-17 06:51:08.869325] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:04.490 [2024-04-17 06:51:08.869342] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa7c850 (9): Bad file descriptor 00:25:04.490 [2024-04-17 06:51:08.869360] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5b840 (9): Bad file descriptor 00:25:04.490 [2024-04-17 06:51:08.869398] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:04.490 [2024-04-17 06:51:08.869416] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:04.490 [2024-04-17 06:51:08.869429] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:04.490 [2024-04-17 06:51:08.869444] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:04.490 [2024-04-17 06:51:08.869457] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:04.490 [2024-04-17 06:51:08.869468] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:04.490 [2024-04-17 06:51:08.869504] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:04.490 [2024-04-17 06:51:08.869521] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:04.767 06:51:09 -- target/shutdown.sh@136 -- # nvmfpid= 00:25:04.767 06:51:09 -- target/shutdown.sh@139 -- # sleep 1 00:25:05.705 06:51:10 -- target/shutdown.sh@142 -- # kill -9 59038 00:25:05.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (59038) - No such process 00:25:05.705 06:51:10 -- target/shutdown.sh@142 -- # true 00:25:05.705 06:51:10 -- target/shutdown.sh@144 -- # stoptarget 00:25:05.705 06:51:10 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:05.705 06:51:10 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:05.705 06:51:10 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:05.705 06:51:10 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:05.705 06:51:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:05.705 06:51:10 -- nvmf/common.sh@117 -- # sync 00:25:05.705 06:51:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:05.705 06:51:10 -- nvmf/common.sh@120 -- # set +e 00:25:05.705 06:51:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:05.705 06:51:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:05.964 rmmod nvme_tcp 00:25:05.964 rmmod nvme_fabrics 00:25:05.964 rmmod nvme_keyring 00:25:05.964 06:51:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:05.964 06:51:10 -- nvmf/common.sh@124 -- # set -e 00:25:05.964 06:51:10 -- nvmf/common.sh@125 -- # return 0 00:25:05.964 06:51:10 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:25:05.964 06:51:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:05.964 06:51:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:05.964 06:51:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:05.964 06:51:10 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:05.964 06:51:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:05.964 06:51:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.964 06:51:10 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:05.964 06:51:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.865 06:51:12 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:07.865 00:25:07.865 real 0m7.473s 00:25:07.865 user 0m18.382s 00:25:07.865 sys 0m1.438s 00:25:07.865 06:51:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:07.865 06:51:12 -- common/autotest_common.sh@10 -- # set +x 00:25:07.865 ************************************ 00:25:07.865 END TEST nvmf_shutdown_tc3 00:25:07.865 ************************************ 00:25:07.865 06:51:12 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:25:07.865 00:25:07.865 real 0m27.568s 00:25:07.865 user 1m17.107s 00:25:07.865 sys 0m6.428s 00:25:07.866 06:51:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:07.866 06:51:12 -- common/autotest_common.sh@10 -- # set +x 00:25:07.866 ************************************ 00:25:07.866 END TEST nvmf_shutdown 00:25:07.866 ************************************ 00:25:07.866 06:51:12 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:25:07.866 06:51:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:07.866 06:51:12 -- common/autotest_common.sh@10 -- # set +x 00:25:08.124 06:51:12 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:25:08.124 06:51:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:08.124 06:51:12 -- common/autotest_common.sh@10 -- # set +x 00:25:08.124 06:51:12 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:25:08.124 06:51:12 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:08.124 06:51:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:08.124 06:51:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:08.124 06:51:12 -- common/autotest_common.sh@10 -- # set +x 00:25:08.124 ************************************ 00:25:08.124 START TEST nvmf_multicontroller 00:25:08.124 ************************************ 00:25:08.124 06:51:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:08.124 * Looking for test storage... 
00:25:08.124 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:08.124 06:51:12 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:08.124 06:51:12 -- nvmf/common.sh@7 -- # uname -s 00:25:08.124 06:51:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:08.124 06:51:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:08.124 06:51:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:08.124 06:51:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:08.124 06:51:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:08.124 06:51:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:08.124 06:51:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:08.124 06:51:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:08.124 06:51:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:08.124 06:51:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:08.124 06:51:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:08.124 06:51:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:08.124 06:51:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:08.124 06:51:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:08.124 06:51:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:08.124 06:51:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:08.124 06:51:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:08.124 06:51:12 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:08.124 06:51:12 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:08.124 06:51:12 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:08.124 06:51:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.124 06:51:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.124 06:51:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.124 06:51:12 -- paths/export.sh@5 -- # export PATH 00:25:08.124 06:51:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.124 06:51:12 -- nvmf/common.sh@47 -- # : 0 00:25:08.124 06:51:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:08.124 06:51:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:08.124 06:51:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:08.124 06:51:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:08.124 06:51:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:08.124 06:51:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:08.124 06:51:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:08.124 06:51:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:08.124 06:51:12 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:08.125 06:51:12 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:08.125 06:51:12 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:08.125 06:51:12 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:08.125 06:51:12 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:08.125 06:51:12 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:25:08.125 06:51:12 -- host/multicontroller.sh@23 -- # nvmftestinit 00:25:08.125 06:51:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:08.125 06:51:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:08.125 06:51:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:08.125 06:51:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:08.125 06:51:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:08.125 06:51:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.125 06:51:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:08.125 06:51:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.125 06:51:12 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:08.125 06:51:12 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:08.125 06:51:12 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:08.125 06:51:12 -- common/autotest_common.sh@10 -- # set +x 00:25:10.027 06:51:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:10.027 06:51:14 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:10.027 06:51:14 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:10.027 06:51:14 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:10.027 
06:51:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:10.027 06:51:14 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:10.027 06:51:14 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:10.027 06:51:14 -- nvmf/common.sh@295 -- # net_devs=() 00:25:10.027 06:51:14 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:10.027 06:51:14 -- nvmf/common.sh@296 -- # e810=() 00:25:10.027 06:51:14 -- nvmf/common.sh@296 -- # local -ga e810 00:25:10.027 06:51:14 -- nvmf/common.sh@297 -- # x722=() 00:25:10.027 06:51:14 -- nvmf/common.sh@297 -- # local -ga x722 00:25:10.027 06:51:14 -- nvmf/common.sh@298 -- # mlx=() 00:25:10.027 06:51:14 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:10.027 06:51:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:10.027 06:51:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:10.027 06:51:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:10.027 06:51:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:10.027 06:51:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:10.027 06:51:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:10.027 06:51:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:10.027 06:51:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:10.027 06:51:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:10.027 06:51:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:10.027 06:51:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:10.027 06:51:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:10.027 06:51:14 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:10.027 06:51:14 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:10.027 06:51:14 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:10.027 06:51:14 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:10.027 06:51:14 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:10.027 06:51:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:10.027 06:51:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:10.028 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:10.028 06:51:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:10.028 06:51:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:10.028 06:51:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.028 06:51:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.028 06:51:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:10.028 06:51:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:10.028 06:51:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:10.028 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:10.028 06:51:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:10.028 06:51:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:10.028 06:51:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.028 06:51:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.028 06:51:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:10.028 06:51:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:10.028 06:51:14 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:10.028 06:51:14 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:10.028 06:51:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:25:10.028 06:51:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.028 06:51:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:10.028 06:51:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.028 06:51:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:10.028 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:10.028 06:51:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.028 06:51:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:10.028 06:51:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.028 06:51:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:10.028 06:51:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.028 06:51:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:10.028 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:10.028 06:51:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.028 06:51:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:10.028 06:51:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:10.028 06:51:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:10.028 06:51:14 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:10.028 06:51:14 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:10.028 06:51:14 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:10.028 06:51:14 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:10.028 06:51:14 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:10.028 06:51:14 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:10.028 06:51:14 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:10.028 06:51:14 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:10.028 06:51:14 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:10.028 06:51:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:10.028 06:51:14 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:10.028 06:51:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:10.028 06:51:14 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:10.028 06:51:14 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:10.028 06:51:14 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:10.028 06:51:14 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:10.028 06:51:14 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:10.028 06:51:14 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:10.028 06:51:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:10.286 06:51:14 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:10.286 06:51:14 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:10.286 06:51:14 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:10.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:10.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:25:10.286 00:25:10.286 --- 10.0.0.2 ping statistics --- 00:25:10.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.286 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:25:10.286 06:51:14 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:10.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:10.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:25:10.286 00:25:10.286 --- 10.0.0.1 ping statistics --- 00:25:10.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.287 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:25:10.287 06:51:14 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:10.287 06:51:14 -- nvmf/common.sh@411 -- # return 0 00:25:10.287 06:51:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:10.287 06:51:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:10.287 06:51:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:10.287 06:51:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:10.287 06:51:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:10.287 06:51:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:10.287 06:51:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:10.287 06:51:14 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:25:10.287 06:51:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:10.287 06:51:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:10.287 06:51:14 -- common/autotest_common.sh@10 -- # set +x 00:25:10.287 06:51:14 -- nvmf/common.sh@470 -- # nvmfpid=62033 00:25:10.287 06:51:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:10.287 06:51:14 -- nvmf/common.sh@471 -- # waitforlisten 62033 00:25:10.287 06:51:14 -- common/autotest_common.sh@817 -- # '[' -z 62033 ']' 00:25:10.287 06:51:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:10.287 06:51:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:10.287 06:51:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:10.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:10.287 06:51:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:10.287 06:51:14 -- common/autotest_common.sh@10 -- # set +x 00:25:10.287 [2024-04-17 06:51:14.754680] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:25:10.287 [2024-04-17 06:51:14.754765] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:10.287 EAL: No free 2048 kB hugepages reported on node 1 00:25:10.287 [2024-04-17 06:51:14.823719] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:10.546 [2024-04-17 06:51:14.913309] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:10.546 [2024-04-17 06:51:14.913362] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:10.546 [2024-04-17 06:51:14.913378] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:10.546 [2024-04-17 06:51:14.913392] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:10.546 [2024-04-17 06:51:14.913403] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:10.546 [2024-04-17 06:51:14.913490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:10.546 [2024-04-17 06:51:14.917195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:10.546 [2024-04-17 06:51:14.917206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:10.546 06:51:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:10.546 06:51:15 -- common/autotest_common.sh@850 -- # return 0 00:25:10.546 06:51:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:10.546 06:51:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:10.546 06:51:15 -- common/autotest_common.sh@10 -- # set +x 00:25:10.546 06:51:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:10.546 06:51:15 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:10.546 06:51:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.546 06:51:15 -- common/autotest_common.sh@10 -- # set +x 00:25:10.546 [2024-04-17 06:51:15.042898] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:10.546 06:51:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.546 06:51:15 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:10.546 06:51:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.546 06:51:15 -- common/autotest_common.sh@10 -- # set +x 00:25:10.546 Malloc0 00:25:10.546 06:51:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.546 06:51:15 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:10.546 06:51:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.546 06:51:15 -- common/autotest_common.sh@10 -- # set +x 00:25:10.546 06:51:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.546 06:51:15 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:10.546 06:51:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.546 06:51:15 -- common/autotest_common.sh@10 -- # set +x 00:25:10.546 06:51:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.546 06:51:15 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:10.546 06:51:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.546 06:51:15 -- common/autotest_common.sh@10 -- # set +x 00:25:10.546 [2024-04-17 06:51:15.105733] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:10.546 06:51:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.546 06:51:15 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:10.546 06:51:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.546 06:51:15 -- common/autotest_common.sh@10 -- # set +x 00:25:10.546 [2024-04-17 06:51:15.113658] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4421 *** 00:25:10.546 06:51:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.546 06:51:15 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:10.546 06:51:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.546 06:51:15 -- common/autotest_common.sh@10 -- # set +x 00:25:10.546 Malloc1 00:25:10.546 06:51:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.546 06:51:15 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:10.546 06:51:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.546 06:51:15 -- common/autotest_common.sh@10 -- # set +x 00:25:10.546 06:51:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.546 06:51:15 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:25:10.546 06:51:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.546 06:51:15 -- common/autotest_common.sh@10 -- # set +x 00:25:10.805 06:51:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.805 06:51:15 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:10.805 06:51:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.805 06:51:15 -- common/autotest_common.sh@10 -- # set +x 00:25:10.805 06:51:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.805 06:51:15 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:25:10.805 06:51:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.805 06:51:15 -- common/autotest_common.sh@10 -- # set +x 00:25:10.805 06:51:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.805 06:51:15 -- host/multicontroller.sh@44 -- # bdevperf_pid=62093 00:25:10.805 06:51:15 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:25:10.805 06:51:15 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:10.805 06:51:15 -- host/multicontroller.sh@47 -- # waitforlisten 62093 /var/tmp/bdevperf.sock 00:25:10.805 06:51:15 -- common/autotest_common.sh@817 -- # '[' -z 62093 ']' 00:25:10.805 06:51:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:10.805 06:51:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:10.805 06:51:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:10.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
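The nvmf_tcp_init trace above reduces to a short iproute2/iptables sequence: one port of the ice NIC (cvl_0_0) is moved into a private network namespace to act as the NVMe/TCP target at 10.0.0.2, the sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and both directions are pinged before nvmf_tgt is launched inside the namespace. A minimal stand-alone sketch of that setup, using the interface names and addresses from this log (they will differ on other hardware):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP (port 4420) in on the initiator-side interface
  ping -c 1 10.0.0.2                                             # root namespace -> namespaced target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> root-namespace initiator

With connectivity confirmed, nvmf_tgt is started under ip netns exec cvl_0_0_ns_spdk so that its 4420/4421 listeners live on the namespaced interface.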
00:25:10.805 06:51:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:10.805 06:51:15 -- common/autotest_common.sh@10 -- # set +x 00:25:11.063 06:51:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:11.063 06:51:15 -- common/autotest_common.sh@850 -- # return 0 00:25:11.063 06:51:15 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:11.063 06:51:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.063 06:51:15 -- common/autotest_common.sh@10 -- # set +x 00:25:11.321 NVMe0n1 00:25:11.321 06:51:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.321 06:51:15 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:11.321 06:51:15 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:25:11.322 06:51:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.322 06:51:15 -- common/autotest_common.sh@10 -- # set +x 00:25:11.322 06:51:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.322 1 00:25:11.322 06:51:15 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:11.322 06:51:15 -- common/autotest_common.sh@638 -- # local es=0 00:25:11.322 06:51:15 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:11.322 06:51:15 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:25:11.322 06:51:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:11.322 06:51:15 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:25:11.322 06:51:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:11.322 06:51:15 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:11.322 06:51:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.322 06:51:15 -- common/autotest_common.sh@10 -- # set +x 00:25:11.322 request: 00:25:11.322 { 00:25:11.322 "name": "NVMe0", 00:25:11.322 "trtype": "tcp", 00:25:11.322 "traddr": "10.0.0.2", 00:25:11.322 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:25:11.322 "hostaddr": "10.0.0.2", 00:25:11.322 "hostsvcid": "60000", 00:25:11.322 "adrfam": "ipv4", 00:25:11.322 "trsvcid": "4420", 00:25:11.322 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:11.322 "method": "bdev_nvme_attach_controller", 00:25:11.322 "req_id": 1 00:25:11.322 } 00:25:11.322 Got JSON-RPC error response 00:25:11.322 response: 00:25:11.322 { 00:25:11.322 "code": -114, 00:25:11.322 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:11.322 } 00:25:11.322 06:51:15 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:25:11.322 06:51:15 -- common/autotest_common.sh@641 -- # es=1 00:25:11.322 06:51:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:11.322 06:51:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:11.322 06:51:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:11.322 06:51:15 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:11.322 06:51:15 -- common/autotest_common.sh@638 -- # local es=0 00:25:11.322 06:51:15 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:11.322 06:51:15 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:25:11.322 06:51:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:11.322 06:51:15 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:25:11.322 06:51:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:11.322 06:51:15 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:11.322 06:51:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.322 06:51:15 -- common/autotest_common.sh@10 -- # set +x 00:25:11.322 request: 00:25:11.322 { 00:25:11.322 "name": "NVMe0", 00:25:11.322 "trtype": "tcp", 00:25:11.322 "traddr": "10.0.0.2", 00:25:11.322 "hostaddr": "10.0.0.2", 00:25:11.322 "hostsvcid": "60000", 00:25:11.322 "adrfam": "ipv4", 00:25:11.322 "trsvcid": "4420", 00:25:11.322 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:11.322 "method": "bdev_nvme_attach_controller", 00:25:11.322 "req_id": 1 00:25:11.322 } 00:25:11.322 Got JSON-RPC error response 00:25:11.322 response: 00:25:11.322 { 00:25:11.322 "code": -114, 00:25:11.322 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:11.322 } 00:25:11.322 06:51:15 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:25:11.322 06:51:15 -- common/autotest_common.sh@641 -- # es=1 00:25:11.322 06:51:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:11.322 06:51:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:11.322 06:51:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:11.322 06:51:15 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:11.322 06:51:15 -- common/autotest_common.sh@638 -- # local es=0 00:25:11.322 06:51:15 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:11.322 06:51:15 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:25:11.322 06:51:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:11.322 06:51:15 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:25:11.322 06:51:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:11.322 06:51:15 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:11.322 06:51:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.322 06:51:15 -- common/autotest_common.sh@10 -- # set +x 00:25:11.322 request: 00:25:11.322 { 00:25:11.322 "name": "NVMe0", 00:25:11.322 "trtype": "tcp", 00:25:11.322 "traddr": "10.0.0.2", 00:25:11.322 "hostaddr": 
"10.0.0.2", 00:25:11.322 "hostsvcid": "60000", 00:25:11.322 "adrfam": "ipv4", 00:25:11.322 "trsvcid": "4420", 00:25:11.322 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:11.322 "multipath": "disable", 00:25:11.322 "method": "bdev_nvme_attach_controller", 00:25:11.322 "req_id": 1 00:25:11.322 } 00:25:11.322 Got JSON-RPC error response 00:25:11.322 response: 00:25:11.322 { 00:25:11.322 "code": -114, 00:25:11.322 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:25:11.322 } 00:25:11.322 06:51:15 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:25:11.322 06:51:15 -- common/autotest_common.sh@641 -- # es=1 00:25:11.322 06:51:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:11.322 06:51:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:11.322 06:51:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:11.322 06:51:15 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:11.322 06:51:15 -- common/autotest_common.sh@638 -- # local es=0 00:25:11.322 06:51:15 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:11.322 06:51:15 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:25:11.322 06:51:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:11.322 06:51:15 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:25:11.322 06:51:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:11.322 06:51:15 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:11.322 06:51:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.322 06:51:15 -- common/autotest_common.sh@10 -- # set +x 00:25:11.322 request: 00:25:11.322 { 00:25:11.322 "name": "NVMe0", 00:25:11.322 "trtype": "tcp", 00:25:11.322 "traddr": "10.0.0.2", 00:25:11.322 "hostaddr": "10.0.0.2", 00:25:11.322 "hostsvcid": "60000", 00:25:11.322 "adrfam": "ipv4", 00:25:11.322 "trsvcid": "4420", 00:25:11.322 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:11.322 "multipath": "failover", 00:25:11.322 "method": "bdev_nvme_attach_controller", 00:25:11.322 "req_id": 1 00:25:11.322 } 00:25:11.322 Got JSON-RPC error response 00:25:11.322 response: 00:25:11.322 { 00:25:11.322 "code": -114, 00:25:11.322 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:11.322 } 00:25:11.322 06:51:15 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:25:11.322 06:51:15 -- common/autotest_common.sh@641 -- # es=1 00:25:11.322 06:51:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:11.322 06:51:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:11.322 06:51:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:11.322 06:51:15 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:11.322 06:51:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.322 06:51:15 -- common/autotest_common.sh@10 -- # set +x 00:25:11.322 00:25:11.322 06:51:15 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:25:11.322 06:51:15 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:11.322 06:51:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.322 06:51:15 -- common/autotest_common.sh@10 -- # set +x 00:25:11.322 06:51:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.322 06:51:15 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:11.322 06:51:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.323 06:51:15 -- common/autotest_common.sh@10 -- # set +x 00:25:11.580 00:25:11.580 06:51:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.580 06:51:15 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:11.580 06:51:15 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:25:11.580 06:51:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.580 06:51:15 -- common/autotest_common.sh@10 -- # set +x 00:25:11.580 06:51:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.580 06:51:15 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:25:11.580 06:51:15 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:12.513 0 00:25:12.513 06:51:17 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:25:12.513 06:51:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:12.513 06:51:17 -- common/autotest_common.sh@10 -- # set +x 00:25:12.513 06:51:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:12.513 06:51:17 -- host/multicontroller.sh@100 -- # killprocess 62093 00:25:12.513 06:51:17 -- common/autotest_common.sh@936 -- # '[' -z 62093 ']' 00:25:12.513 06:51:17 -- common/autotest_common.sh@940 -- # kill -0 62093 00:25:12.771 06:51:17 -- common/autotest_common.sh@941 -- # uname 00:25:12.771 06:51:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:12.771 06:51:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62093 00:25:12.771 06:51:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:12.771 06:51:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:12.771 06:51:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62093' 00:25:12.771 killing process with pid 62093 00:25:12.771 06:51:17 -- common/autotest_common.sh@955 -- # kill 62093 00:25:12.771 06:51:17 -- common/autotest_common.sh@960 -- # wait 62093 00:25:12.771 06:51:17 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:12.771 06:51:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:12.771 06:51:17 -- common/autotest_common.sh@10 -- # set +x 00:25:13.029 06:51:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:13.029 06:51:17 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:13.029 06:51:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:13.029 06:51:17 -- common/autotest_common.sh@10 -- # set +x 00:25:13.029 06:51:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:13.029 06:51:17 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:25:13.029 06:51:17 -- 
host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:13.029 06:51:17 -- common/autotest_common.sh@1598 -- # read -r file 00:25:13.029 06:51:17 -- common/autotest_common.sh@1597 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:25:13.029 06:51:17 -- common/autotest_common.sh@1597 -- # sort -u 00:25:13.029 06:51:17 -- common/autotest_common.sh@1599 -- # cat 00:25:13.029 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:13.029 [2024-04-17 06:51:15.212004] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:25:13.029 [2024-04-17 06:51:15.212095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62093 ] 00:25:13.029 EAL: No free 2048 kB hugepages reported on node 1 00:25:13.030 [2024-04-17 06:51:15.275419] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.030 [2024-04-17 06:51:15.359616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.030 [2024-04-17 06:51:15.952670] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name da0a92d4-9e6e-4c71-b5a5-8a2da32c97c9 already exists 00:25:13.030 [2024-04-17 06:51:15.952712] bdev.c:7651:bdev_register: *ERROR*: Unable to add uuid:da0a92d4-9e6e-4c71-b5a5-8a2da32c97c9 alias for bdev NVMe1n1 00:25:13.030 [2024-04-17 06:51:15.952745] bdev_nvme.c:4272:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:25:13.030 Running I/O for 1 seconds... 00:25:13.030 00:25:13.030 Latency(us) 00:25:13.030 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.030 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:25:13.030 NVMe0n1 : 1.00 17427.38 68.08 0.00 0.00 7333.56 2063.17 13204.29 00:25:13.030 =================================================================================================================== 00:25:13.030 Total : 17427.38 68.08 0.00 0.00 7333.56 2063.17 13204.29 00:25:13.030 Received shutdown signal, test time was about 1.000000 seconds 00:25:13.030 00:25:13.030 Latency(us) 00:25:13.030 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.030 =================================================================================================================== 00:25:13.030 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:13.030 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:13.030 06:51:17 -- common/autotest_common.sh@1604 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:13.030 06:51:17 -- common/autotest_common.sh@1598 -- # read -r file 00:25:13.030 06:51:17 -- host/multicontroller.sh@108 -- # nvmftestfini 00:25:13.030 06:51:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:13.030 06:51:17 -- nvmf/common.sh@117 -- # sync 00:25:13.030 06:51:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:13.030 06:51:17 -- nvmf/common.sh@120 -- # set +e 00:25:13.030 06:51:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:13.030 06:51:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:13.030 rmmod nvme_tcp 00:25:13.030 rmmod nvme_fabrics 00:25:13.030 rmmod nvme_keyring 00:25:13.030 06:51:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:13.030 06:51:17 -- nvmf/common.sh@124 -- # set -e 00:25:13.030 06:51:17 -- 
nvmf/common.sh@125 -- # return 0 00:25:13.030 06:51:17 -- nvmf/common.sh@478 -- # '[' -n 62033 ']' 00:25:13.030 06:51:17 -- nvmf/common.sh@479 -- # killprocess 62033 00:25:13.030 06:51:17 -- common/autotest_common.sh@936 -- # '[' -z 62033 ']' 00:25:13.030 06:51:17 -- common/autotest_common.sh@940 -- # kill -0 62033 00:25:13.030 06:51:17 -- common/autotest_common.sh@941 -- # uname 00:25:13.030 06:51:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:13.030 06:51:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62033 00:25:13.030 06:51:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:13.030 06:51:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:13.030 06:51:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62033' 00:25:13.030 killing process with pid 62033 00:25:13.030 06:51:17 -- common/autotest_common.sh@955 -- # kill 62033 00:25:13.030 06:51:17 -- common/autotest_common.sh@960 -- # wait 62033 00:25:13.289 06:51:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:13.289 06:51:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:13.289 06:51:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:13.289 06:51:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:13.289 06:51:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:13.289 06:51:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.289 06:51:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:13.289 06:51:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.821 06:51:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:15.821 00:25:15.821 real 0m7.237s 00:25:15.821 user 0m11.304s 00:25:15.821 sys 0m2.238s 00:25:15.821 06:51:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:15.821 06:51:19 -- common/autotest_common.sh@10 -- # set +x 00:25:15.821 ************************************ 00:25:15.821 END TEST nvmf_multicontroller 00:25:15.821 ************************************ 00:25:15.821 06:51:19 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:15.821 06:51:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:15.821 06:51:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:15.821 06:51:19 -- common/autotest_common.sh@10 -- # set +x 00:25:15.821 ************************************ 00:25:15.821 START TEST nvmf_aer 00:25:15.821 ************************************ 00:25:15.821 06:51:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:15.821 * Looking for test storage... 
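For reference, every duplicate-attach check in the multicontroller run above is an ordinary bdev_nvme_attach_controller RPC against the bdevperf socket, and the -114 responses are the expected rejections: the same controller name may not be reused with a different host NQN, a different subsystem, or with multipath disabled, while adding the 4421 listener as a second path to the same subsystem is accepted. A hedged sketch of the same calls with SPDK's scripts/rpc.py (socket path, addresses and NQNs taken from this log; assumes it is run from the SPDK repository root):

  RPC='scripts/rpc.py -s /var/tmp/bdevperf.sock'
  # first attach creates bdev NVMe0n1 on the primary path (port 4420)
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # reusing the name for a different subsystem is rejected with JSON-RPC error -114
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 || true
  # a second path to the same subsystem (listener on port 4421) is accepted
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $RPC bdev_nvme_get_controllers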
00:25:15.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:15.821 06:51:19 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:15.821 06:51:19 -- nvmf/common.sh@7 -- # uname -s 00:25:15.821 06:51:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:15.821 06:51:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:15.821 06:51:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:15.821 06:51:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:15.821 06:51:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:15.821 06:51:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:15.821 06:51:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:15.821 06:51:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:15.821 06:51:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:15.821 06:51:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:15.821 06:51:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:15.821 06:51:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:15.821 06:51:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:15.821 06:51:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:15.821 06:51:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:15.821 06:51:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:15.821 06:51:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:15.821 06:51:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:15.821 06:51:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:15.821 06:51:20 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:15.821 06:51:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.821 06:51:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.821 06:51:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.821 06:51:20 -- paths/export.sh@5 -- # export PATH 00:25:15.821 06:51:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.821 06:51:20 -- nvmf/common.sh@47 -- # : 0 00:25:15.821 06:51:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:15.821 06:51:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:15.821 06:51:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:15.821 06:51:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:15.821 06:51:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:15.821 06:51:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:15.821 06:51:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:15.821 06:51:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:15.821 06:51:20 -- host/aer.sh@11 -- # nvmftestinit 00:25:15.821 06:51:20 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:15.821 06:51:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:15.821 06:51:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:15.821 06:51:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:15.821 06:51:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:15.821 06:51:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.821 06:51:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:15.821 06:51:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.821 06:51:20 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:15.821 06:51:20 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:15.821 06:51:20 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:15.821 06:51:20 -- common/autotest_common.sh@10 -- # set +x 00:25:17.720 06:51:21 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:17.720 06:51:21 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:17.720 06:51:21 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:17.720 06:51:21 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:17.720 06:51:21 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:17.720 06:51:21 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:17.720 06:51:21 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:17.720 06:51:21 -- nvmf/common.sh@295 -- # net_devs=() 00:25:17.720 06:51:21 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:17.720 06:51:21 -- nvmf/common.sh@296 -- # e810=() 00:25:17.720 06:51:21 -- nvmf/common.sh@296 -- # local -ga e810 00:25:17.720 06:51:21 -- nvmf/common.sh@297 -- # x722=() 00:25:17.720 
06:51:21 -- nvmf/common.sh@297 -- # local -ga x722 00:25:17.720 06:51:21 -- nvmf/common.sh@298 -- # mlx=() 00:25:17.720 06:51:21 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:17.720 06:51:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:17.720 06:51:21 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:17.721 06:51:21 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:17.721 06:51:21 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:17.721 06:51:21 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:17.721 06:51:21 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:17.721 06:51:21 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:17.721 06:51:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:17.721 06:51:21 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:17.721 06:51:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:17.721 06:51:21 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:17.721 06:51:21 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:17.721 06:51:21 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:17.721 06:51:21 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:17.721 06:51:21 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:17.721 06:51:21 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:17.721 06:51:21 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:17.721 06:51:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:17.721 06:51:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:17.721 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:17.721 06:51:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:17.721 06:51:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:17.721 06:51:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.721 06:51:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.721 06:51:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:17.721 06:51:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:17.721 06:51:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:17.721 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:17.721 06:51:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:17.721 06:51:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:17.721 06:51:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.721 06:51:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.721 06:51:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:17.721 06:51:21 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:17.721 06:51:21 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:17.721 06:51:21 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:17.721 06:51:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:17.721 06:51:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.721 06:51:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:17.721 06:51:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.721 06:51:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:17.721 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:17.721 06:51:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.721 06:51:21 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:17.721 06:51:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.721 06:51:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:17.721 06:51:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.721 06:51:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:17.721 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:17.721 06:51:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.721 06:51:21 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:17.721 06:51:21 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:17.721 06:51:21 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:17.721 06:51:21 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:17.721 06:51:21 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:17.721 06:51:21 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:17.721 06:51:21 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:17.721 06:51:21 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:17.721 06:51:21 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:17.721 06:51:21 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:17.721 06:51:21 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:17.721 06:51:21 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:17.721 06:51:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:17.721 06:51:21 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:17.721 06:51:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:17.721 06:51:21 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:17.721 06:51:21 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:17.721 06:51:21 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:17.721 06:51:21 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:17.721 06:51:21 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:17.721 06:51:21 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:17.721 06:51:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:17.721 06:51:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:17.721 06:51:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:17.721 06:51:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:17.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:17.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:25:17.721 00:25:17.721 --- 10.0.0.2 ping statistics --- 00:25:17.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.721 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:25:17.721 06:51:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:17.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:17.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:25:17.721 00:25:17.721 --- 10.0.0.1 ping statistics --- 00:25:17.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.721 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:25:17.721 06:51:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:17.721 06:51:22 -- nvmf/common.sh@411 -- # return 0 00:25:17.721 06:51:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:17.721 06:51:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:17.721 06:51:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:17.721 06:51:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:17.721 06:51:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:17.721 06:51:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:17.721 06:51:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:17.721 06:51:22 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:25:17.721 06:51:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:17.721 06:51:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:17.721 06:51:22 -- common/autotest_common.sh@10 -- # set +x 00:25:17.721 06:51:22 -- nvmf/common.sh@470 -- # nvmfpid=64313 00:25:17.721 06:51:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:17.721 06:51:22 -- nvmf/common.sh@471 -- # waitforlisten 64313 00:25:17.721 06:51:22 -- common/autotest_common.sh@817 -- # '[' -z 64313 ']' 00:25:17.721 06:51:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.721 06:51:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:17.721 06:51:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:17.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:17.721 06:51:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:17.721 06:51:22 -- common/autotest_common.sh@10 -- # set +x 00:25:17.721 [2024-04-17 06:51:22.100236] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:25:17.721 [2024-04-17 06:51:22.100310] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:17.721 EAL: No free 2048 kB hugepages reported on node 1 00:25:17.721 [2024-04-17 06:51:22.171846] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:17.721 [2024-04-17 06:51:22.263630] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:17.721 [2024-04-17 06:51:22.263683] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:17.721 [2024-04-17 06:51:22.263701] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:17.721 [2024-04-17 06:51:22.263715] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:17.721 [2024-04-17 06:51:22.263726] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:17.721 [2024-04-17 06:51:22.263825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:17.721 [2024-04-17 06:51:22.263894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:17.721 [2024-04-17 06:51:22.263926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:17.721 [2024-04-17 06:51:22.263929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.979 06:51:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:17.979 06:51:22 -- common/autotest_common.sh@850 -- # return 0 00:25:17.979 06:51:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:17.979 06:51:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:17.979 06:51:22 -- common/autotest_common.sh@10 -- # set +x 00:25:17.979 06:51:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:17.979 06:51:22 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:17.979 06:51:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:17.979 06:51:22 -- common/autotest_common.sh@10 -- # set +x 00:25:17.979 [2024-04-17 06:51:22.404732] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:17.979 06:51:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:17.979 06:51:22 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:17.979 06:51:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:17.979 06:51:22 -- common/autotest_common.sh@10 -- # set +x 00:25:17.979 Malloc0 00:25:17.979 06:51:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:17.979 06:51:22 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:17.979 06:51:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:17.979 06:51:22 -- common/autotest_common.sh@10 -- # set +x 00:25:17.979 06:51:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:17.979 06:51:22 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:17.980 06:51:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:17.980 06:51:22 -- common/autotest_common.sh@10 -- # set +x 00:25:17.980 06:51:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:17.980 06:51:22 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:17.980 06:51:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:17.980 06:51:22 -- common/autotest_common.sh@10 -- # set +x 00:25:17.980 [2024-04-17 06:51:22.455629] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:17.980 06:51:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:17.980 06:51:22 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:25:17.980 06:51:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:17.980 06:51:22 -- common/autotest_common.sh@10 -- # set +x 00:25:17.980 [2024-04-17 06:51:22.463370] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:17.980 [ 00:25:17.980 { 00:25:17.980 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:17.980 "subtype": "Discovery", 00:25:17.980 "listen_addresses": [], 00:25:17.980 "allow_any_host": true, 00:25:17.980 "hosts": [] 00:25:17.980 }, 00:25:17.980 { 00:25:17.980 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:25:17.980 "subtype": "NVMe", 00:25:17.980 "listen_addresses": [ 00:25:17.980 { 00:25:17.980 "transport": "TCP", 00:25:17.980 "trtype": "TCP", 00:25:17.980 "adrfam": "IPv4", 00:25:17.980 "traddr": "10.0.0.2", 00:25:17.980 "trsvcid": "4420" 00:25:17.980 } 00:25:17.980 ], 00:25:17.980 "allow_any_host": true, 00:25:17.980 "hosts": [], 00:25:17.980 "serial_number": "SPDK00000000000001", 00:25:17.980 "model_number": "SPDK bdev Controller", 00:25:17.980 "max_namespaces": 2, 00:25:17.980 "min_cntlid": 1, 00:25:17.980 "max_cntlid": 65519, 00:25:17.980 "namespaces": [ 00:25:17.980 { 00:25:17.980 "nsid": 1, 00:25:17.980 "bdev_name": "Malloc0", 00:25:17.980 "name": "Malloc0", 00:25:17.980 "nguid": "1A7D49E800684AE4AA8AC285134DC9B5", 00:25:17.980 "uuid": "1a7d49e8-0068-4ae4-aa8a-c285134dc9b5" 00:25:17.980 } 00:25:17.980 ] 00:25:17.980 } 00:25:17.980 ] 00:25:17.980 06:51:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:17.980 06:51:22 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:25:17.980 06:51:22 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:25:17.980 06:51:22 -- host/aer.sh@33 -- # aerpid=64347 00:25:17.980 06:51:22 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:25:17.980 06:51:22 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:25:17.980 06:51:22 -- common/autotest_common.sh@1251 -- # local i=0 00:25:17.980 06:51:22 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:17.980 06:51:22 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:25:17.980 06:51:22 -- common/autotest_common.sh@1254 -- # i=1 00:25:17.980 06:51:22 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:25:17.980 EAL: No free 2048 kB hugepages reported on node 1 00:25:17.980 06:51:22 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:17.980 06:51:22 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:25:17.980 06:51:22 -- common/autotest_common.sh@1254 -- # i=2 00:25:17.980 06:51:22 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:25:18.238 06:51:22 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:18.238 06:51:22 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:18.238 06:51:22 -- common/autotest_common.sh@1262 -- # return 0 00:25:18.238 06:51:22 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:18.238 06:51:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.238 06:51:22 -- common/autotest_common.sh@10 -- # set +x 00:25:18.238 Malloc1 00:25:18.238 06:51:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.238 06:51:22 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:18.238 06:51:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.238 06:51:22 -- common/autotest_common.sh@10 -- # set +x 00:25:18.238 06:51:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.238 06:51:22 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:18.238 06:51:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.238 06:51:22 -- common/autotest_common.sh@10 -- # set +x 00:25:18.238 Asynchronous Event Request test 00:25:18.238 Attaching to 10.0.0.2 00:25:18.238 Attached to 10.0.0.2 00:25:18.238 Registering asynchronous event callbacks... 
00:25:18.238 Starting namespace attribute notice tests for all controllers... 00:25:18.238 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:18.238 aer_cb - Changed Namespace 00:25:18.238 Cleaning up... 00:25:18.238 [ 00:25:18.238 { 00:25:18.238 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:18.238 "subtype": "Discovery", 00:25:18.238 "listen_addresses": [], 00:25:18.238 "allow_any_host": true, 00:25:18.238 "hosts": [] 00:25:18.238 }, 00:25:18.238 { 00:25:18.238 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:18.238 "subtype": "NVMe", 00:25:18.238 "listen_addresses": [ 00:25:18.238 { 00:25:18.238 "transport": "TCP", 00:25:18.238 "trtype": "TCP", 00:25:18.238 "adrfam": "IPv4", 00:25:18.238 "traddr": "10.0.0.2", 00:25:18.238 "trsvcid": "4420" 00:25:18.238 } 00:25:18.238 ], 00:25:18.238 "allow_any_host": true, 00:25:18.238 "hosts": [], 00:25:18.238 "serial_number": "SPDK00000000000001", 00:25:18.238 "model_number": "SPDK bdev Controller", 00:25:18.238 "max_namespaces": 2, 00:25:18.238 "min_cntlid": 1, 00:25:18.238 "max_cntlid": 65519, 00:25:18.238 "namespaces": [ 00:25:18.238 { 00:25:18.238 "nsid": 1, 00:25:18.238 "bdev_name": "Malloc0", 00:25:18.238 "name": "Malloc0", 00:25:18.238 "nguid": "1A7D49E800684AE4AA8AC285134DC9B5", 00:25:18.238 "uuid": "1a7d49e8-0068-4ae4-aa8a-c285134dc9b5" 00:25:18.238 }, 00:25:18.238 { 00:25:18.238 "nsid": 2, 00:25:18.238 "bdev_name": "Malloc1", 00:25:18.238 "name": "Malloc1", 00:25:18.238 "nguid": "078614F3BC9C45958FE08ACD463885E4", 00:25:18.238 "uuid": "078614f3-bc9c-4595-8fe0-8acd463885e4" 00:25:18.238 } 00:25:18.238 ] 00:25:18.238 } 00:25:18.238 ] 00:25:18.238 06:51:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.238 06:51:22 -- host/aer.sh@43 -- # wait 64347 00:25:18.238 06:51:22 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:18.238 06:51:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.238 06:51:22 -- common/autotest_common.sh@10 -- # set +x 00:25:18.238 06:51:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.238 06:51:22 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:25:18.238 06:51:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.238 06:51:22 -- common/autotest_common.sh@10 -- # set +x 00:25:18.238 06:51:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.238 06:51:22 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:18.238 06:51:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.238 06:51:22 -- common/autotest_common.sh@10 -- # set +x 00:25:18.238 06:51:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.238 06:51:22 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:18.238 06:51:22 -- host/aer.sh@51 -- # nvmftestfini 00:25:18.238 06:51:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:18.238 06:51:22 -- nvmf/common.sh@117 -- # sync 00:25:18.238 06:51:22 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:18.238 06:51:22 -- nvmf/common.sh@120 -- # set +e 00:25:18.238 06:51:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:18.238 06:51:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:18.238 rmmod nvme_tcp 00:25:18.496 rmmod nvme_fabrics 00:25:18.496 rmmod nvme_keyring 00:25:18.496 06:51:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:18.496 06:51:22 -- nvmf/common.sh@124 -- # set -e 00:25:18.496 06:51:22 -- nvmf/common.sh@125 -- # return 0 00:25:18.496 06:51:22 -- nvmf/common.sh@478 -- # '[' -n 64313 ']' 00:25:18.496 06:51:22 -- 
nvmf/common.sh@479 -- # killprocess 64313 00:25:18.496 06:51:22 -- common/autotest_common.sh@936 -- # '[' -z 64313 ']' 00:25:18.496 06:51:22 -- common/autotest_common.sh@940 -- # kill -0 64313 00:25:18.496 06:51:22 -- common/autotest_common.sh@941 -- # uname 00:25:18.496 06:51:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:18.496 06:51:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64313 00:25:18.496 06:51:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:18.496 06:51:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:18.496 06:51:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64313' 00:25:18.496 killing process with pid 64313 00:25:18.496 06:51:22 -- common/autotest_common.sh@955 -- # kill 64313 00:25:18.496 [2024-04-17 06:51:22.902433] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:18.496 06:51:22 -- common/autotest_common.sh@960 -- # wait 64313 00:25:18.754 06:51:23 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:18.754 06:51:23 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:18.754 06:51:23 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:18.754 06:51:23 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:18.754 06:51:23 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:18.754 06:51:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.754 06:51:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:18.754 06:51:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.657 06:51:25 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:20.657 00:25:20.657 real 0m5.239s 00:25:20.657 user 0m4.060s 00:25:20.657 sys 0m1.808s 00:25:20.657 06:51:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:20.657 06:51:25 -- common/autotest_common.sh@10 -- # set +x 00:25:20.657 ************************************ 00:25:20.657 END TEST nvmf_aer 00:25:20.657 ************************************ 00:25:20.657 06:51:25 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:20.657 06:51:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:20.657 06:51:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:20.657 06:51:25 -- common/autotest_common.sh@10 -- # set +x 00:25:20.915 ************************************ 00:25:20.915 START TEST nvmf_async_init 00:25:20.915 ************************************ 00:25:20.915 06:51:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:20.915 * Looking for test storage... 
00:25:20.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:20.915 06:51:25 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:20.915 06:51:25 -- nvmf/common.sh@7 -- # uname -s 00:25:20.915 06:51:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:20.915 06:51:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:20.915 06:51:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:20.915 06:51:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:20.915 06:51:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:20.915 06:51:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:20.915 06:51:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:20.915 06:51:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:20.915 06:51:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:20.915 06:51:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:20.915 06:51:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:20.915 06:51:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:20.915 06:51:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:20.915 06:51:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:20.915 06:51:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:20.915 06:51:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:20.915 06:51:25 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:20.915 06:51:25 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:20.915 06:51:25 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:20.915 06:51:25 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:20.915 06:51:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.915 06:51:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.915 06:51:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.915 06:51:25 -- paths/export.sh@5 -- # export PATH 00:25:20.915 06:51:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.915 06:51:25 -- nvmf/common.sh@47 -- # : 0 00:25:20.916 06:51:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:20.916 06:51:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:20.916 06:51:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:20.916 06:51:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:20.916 06:51:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:20.916 06:51:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:20.916 06:51:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:20.916 06:51:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:20.916 06:51:25 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:20.916 06:51:25 -- host/async_init.sh@14 -- # null_block_size=512 00:25:20.916 06:51:25 -- host/async_init.sh@15 -- # null_bdev=null0 00:25:20.916 06:51:25 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:20.916 06:51:25 -- host/async_init.sh@20 -- # uuidgen 00:25:20.916 06:51:25 -- host/async_init.sh@20 -- # tr -d - 00:25:20.916 06:51:25 -- host/async_init.sh@20 -- # nguid=f5578f02611e4a17bdae30c5bd357dec 00:25:20.916 06:51:25 -- host/async_init.sh@22 -- # nvmftestinit 00:25:20.916 06:51:25 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:20.916 06:51:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:20.916 06:51:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:20.916 06:51:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:20.916 06:51:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:20.916 06:51:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.916 06:51:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:20.916 06:51:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.916 06:51:25 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:20.916 06:51:25 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:20.916 06:51:25 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:20.916 06:51:25 -- common/autotest_common.sh@10 -- # set +x 00:25:22.848 06:51:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:22.848 06:51:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:22.848 06:51:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:22.848 06:51:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:22.848 06:51:27 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:22.848 06:51:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:22.848 06:51:27 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:22.848 06:51:27 -- nvmf/common.sh@295 -- # net_devs=() 00:25:22.848 06:51:27 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:22.848 06:51:27 -- nvmf/common.sh@296 -- # e810=() 00:25:22.848 06:51:27 -- nvmf/common.sh@296 -- # local -ga e810 00:25:22.848 06:51:27 -- nvmf/common.sh@297 -- # x722=() 00:25:22.848 06:51:27 -- nvmf/common.sh@297 -- # local -ga x722 00:25:22.848 06:51:27 -- nvmf/common.sh@298 -- # mlx=() 00:25:22.848 06:51:27 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:22.848 06:51:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:22.848 06:51:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:22.848 06:51:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:22.848 06:51:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:22.848 06:51:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:22.848 06:51:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:22.848 06:51:27 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:22.848 06:51:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:22.848 06:51:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:22.848 06:51:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:22.848 06:51:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:22.848 06:51:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:22.848 06:51:27 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:22.848 06:51:27 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:22.848 06:51:27 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:22.848 06:51:27 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:22.848 06:51:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:22.848 06:51:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:22.848 06:51:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:22.848 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:22.848 06:51:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:22.848 06:51:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:22.848 06:51:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.848 06:51:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.848 06:51:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:22.848 06:51:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:22.848 06:51:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:22.848 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:22.848 06:51:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:22.848 06:51:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:22.848 06:51:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.848 06:51:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.848 06:51:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:22.848 06:51:27 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:22.848 06:51:27 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:22.848 06:51:27 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:22.848 06:51:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:22.848 
06:51:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.848 06:51:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:22.848 06:51:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.848 06:51:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:22.848 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:22.848 06:51:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.848 06:51:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:22.848 06:51:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.848 06:51:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:22.848 06:51:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.848 06:51:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:22.848 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:22.848 06:51:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.848 06:51:27 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:22.848 06:51:27 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:22.848 06:51:27 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:22.848 06:51:27 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:22.848 06:51:27 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:22.848 06:51:27 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:22.848 06:51:27 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:22.848 06:51:27 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:22.848 06:51:27 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:22.848 06:51:27 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:22.848 06:51:27 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:22.848 06:51:27 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:22.848 06:51:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:22.848 06:51:27 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:22.848 06:51:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:22.848 06:51:27 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:22.848 06:51:27 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:22.848 06:51:27 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:22.848 06:51:27 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:22.848 06:51:27 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:23.107 06:51:27 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:23.107 06:51:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:23.107 06:51:27 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:23.107 06:51:27 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:23.107 06:51:27 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:23.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:23.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:25:23.107 00:25:23.107 --- 10.0.0.2 ping statistics --- 00:25:23.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.107 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:25:23.107 06:51:27 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:23.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:23.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:25:23.107 00:25:23.107 --- 10.0.0.1 ping statistics --- 00:25:23.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.107 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:25:23.107 06:51:27 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:23.107 06:51:27 -- nvmf/common.sh@411 -- # return 0 00:25:23.107 06:51:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:23.107 06:51:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:23.107 06:51:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:23.107 06:51:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:23.107 06:51:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:23.107 06:51:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:23.107 06:51:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:23.107 06:51:27 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:23.107 06:51:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:23.107 06:51:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:23.107 06:51:27 -- common/autotest_common.sh@10 -- # set +x 00:25:23.107 06:51:27 -- nvmf/common.sh@470 -- # nvmfpid=66407 00:25:23.107 06:51:27 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:23.107 06:51:27 -- nvmf/common.sh@471 -- # waitforlisten 66407 00:25:23.107 06:51:27 -- common/autotest_common.sh@817 -- # '[' -z 66407 ']' 00:25:23.107 06:51:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:23.107 06:51:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:23.107 06:51:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:23.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:23.107 06:51:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:23.107 06:51:27 -- common/autotest_common.sh@10 -- # set +x 00:25:23.107 [2024-04-17 06:51:27.592245] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:25:23.107 [2024-04-17 06:51:27.592334] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:23.107 EAL: No free 2048 kB hugepages reported on node 1 00:25:23.107 [2024-04-17 06:51:27.655158] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.365 [2024-04-17 06:51:27.738129] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:23.365 [2024-04-17 06:51:27.738199] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:23.365 [2024-04-17 06:51:27.738214] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:23.365 [2024-04-17 06:51:27.738240] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:23.365 [2024-04-17 06:51:27.738250] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
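For reference, the target start-up traced above can be reproduced by hand outside the autotest harness. This is a minimal sketch only, not the harness code itself: it assumes SPDK is checked out at the workspace path shown in the log, the cvl_0_0_ns_spdk namespace already exists, the default RPC socket /var/tmp/spdk.sock is in use, and it is run as root. It launches nvmf_tgt inside the target netns (simplified to just the core mask; the log additionally passes -i and -e for the shm id and trace mask), polls the RPC server until it answers, then creates the TCP transport with the same options the log records in NVMF_TRANSPORT_OPTS.

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK_DIR/scripts/rpc.py"

    # Start the target on core 0 inside the target network namespace.
    # The Unix-domain RPC socket lives on the shared filesystem, so rpc.py
    # can be called from the default namespace afterwards.
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -m 0x1 &

    # Poll the RPC server (up to ~10 s) before issuing any further RPCs.
    for _ in $(seq 1 100); do
        "$RPC" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done

    # Same transport options as the log's '-t tcp -o'.
    "$RPC" nvmf_create_transport -t tcp -o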
00:25:23.365 [2024-04-17 06:51:27.738277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.365 06:51:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:23.365 06:51:27 -- common/autotest_common.sh@850 -- # return 0 00:25:23.365 06:51:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:23.365 06:51:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:23.365 06:51:27 -- common/autotest_common.sh@10 -- # set +x 00:25:23.365 06:51:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:23.365 06:51:27 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:23.365 06:51:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.365 06:51:27 -- common/autotest_common.sh@10 -- # set +x 00:25:23.365 [2024-04-17 06:51:27.869106] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:23.365 06:51:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.365 06:51:27 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:23.365 06:51:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.365 06:51:27 -- common/autotest_common.sh@10 -- # set +x 00:25:23.365 null0 00:25:23.365 06:51:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.365 06:51:27 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:23.365 06:51:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.365 06:51:27 -- common/autotest_common.sh@10 -- # set +x 00:25:23.365 06:51:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.365 06:51:27 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:23.365 06:51:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.365 06:51:27 -- common/autotest_common.sh@10 -- # set +x 00:25:23.365 06:51:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.365 06:51:27 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g f5578f02611e4a17bdae30c5bd357dec 00:25:23.365 06:51:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.365 06:51:27 -- common/autotest_common.sh@10 -- # set +x 00:25:23.365 06:51:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.365 06:51:27 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:23.365 06:51:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.365 06:51:27 -- common/autotest_common.sh@10 -- # set +x 00:25:23.365 [2024-04-17 06:51:27.909334] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:23.365 06:51:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.365 06:51:27 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:23.365 06:51:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.365 06:51:27 -- common/autotest_common.sh@10 -- # set +x 00:25:23.623 nvme0n1 00:25:23.623 06:51:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.623 06:51:28 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:23.623 06:51:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.623 06:51:28 -- common/autotest_common.sh@10 -- # set +x 00:25:23.623 [ 00:25:23.623 { 00:25:23.623 "name": "nvme0n1", 00:25:23.623 "aliases": [ 00:25:23.623 
"f5578f02-611e-4a17-bdae-30c5bd357dec" 00:25:23.623 ], 00:25:23.623 "product_name": "NVMe disk", 00:25:23.623 "block_size": 512, 00:25:23.623 "num_blocks": 2097152, 00:25:23.623 "uuid": "f5578f02-611e-4a17-bdae-30c5bd357dec", 00:25:23.623 "assigned_rate_limits": { 00:25:23.623 "rw_ios_per_sec": 0, 00:25:23.623 "rw_mbytes_per_sec": 0, 00:25:23.623 "r_mbytes_per_sec": 0, 00:25:23.623 "w_mbytes_per_sec": 0 00:25:23.623 }, 00:25:23.623 "claimed": false, 00:25:23.623 "zoned": false, 00:25:23.623 "supported_io_types": { 00:25:23.623 "read": true, 00:25:23.623 "write": true, 00:25:23.623 "unmap": false, 00:25:23.623 "write_zeroes": true, 00:25:23.623 "flush": true, 00:25:23.623 "reset": true, 00:25:23.623 "compare": true, 00:25:23.623 "compare_and_write": true, 00:25:23.623 "abort": true, 00:25:23.623 "nvme_admin": true, 00:25:23.623 "nvme_io": true 00:25:23.623 }, 00:25:23.623 "memory_domains": [ 00:25:23.623 { 00:25:23.623 "dma_device_id": "system", 00:25:23.623 "dma_device_type": 1 00:25:23.623 } 00:25:23.623 ], 00:25:23.623 "driver_specific": { 00:25:23.623 "nvme": [ 00:25:23.623 { 00:25:23.623 "trid": { 00:25:23.623 "trtype": "TCP", 00:25:23.623 "adrfam": "IPv4", 00:25:23.623 "traddr": "10.0.0.2", 00:25:23.623 "trsvcid": "4420", 00:25:23.623 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:23.623 }, 00:25:23.623 "ctrlr_data": { 00:25:23.623 "cntlid": 1, 00:25:23.623 "vendor_id": "0x8086", 00:25:23.623 "model_number": "SPDK bdev Controller", 00:25:23.623 "serial_number": "00000000000000000000", 00:25:23.623 "firmware_revision": "24.05", 00:25:23.623 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:23.623 "oacs": { 00:25:23.623 "security": 0, 00:25:23.623 "format": 0, 00:25:23.623 "firmware": 0, 00:25:23.623 "ns_manage": 0 00:25:23.623 }, 00:25:23.623 "multi_ctrlr": true, 00:25:23.623 "ana_reporting": false 00:25:23.623 }, 00:25:23.623 "vs": { 00:25:23.623 "nvme_version": "1.3" 00:25:23.623 }, 00:25:23.623 "ns_data": { 00:25:23.623 "id": 1, 00:25:23.623 "can_share": true 00:25:23.623 } 00:25:23.623 } 00:25:23.623 ], 00:25:23.623 "mp_policy": "active_passive" 00:25:23.623 } 00:25:23.623 } 00:25:23.623 ] 00:25:23.623 06:51:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.623 06:51:28 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:23.623 06:51:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.623 06:51:28 -- common/autotest_common.sh@10 -- # set +x 00:25:23.623 [2024-04-17 06:51:28.161915] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:23.623 [2024-04-17 06:51:28.162011] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a6790 (9): Bad file descriptor 00:25:23.881 [2024-04-17 06:51:28.304328] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:23.881 06:51:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.881 06:51:28 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:23.881 06:51:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.881 06:51:28 -- common/autotest_common.sh@10 -- # set +x 00:25:23.881 [ 00:25:23.881 { 00:25:23.881 "name": "nvme0n1", 00:25:23.881 "aliases": [ 00:25:23.881 "f5578f02-611e-4a17-bdae-30c5bd357dec" 00:25:23.881 ], 00:25:23.881 "product_name": "NVMe disk", 00:25:23.881 "block_size": 512, 00:25:23.881 "num_blocks": 2097152, 00:25:23.881 "uuid": "f5578f02-611e-4a17-bdae-30c5bd357dec", 00:25:23.881 "assigned_rate_limits": { 00:25:23.881 "rw_ios_per_sec": 0, 00:25:23.881 "rw_mbytes_per_sec": 0, 00:25:23.881 "r_mbytes_per_sec": 0, 00:25:23.881 "w_mbytes_per_sec": 0 00:25:23.881 }, 00:25:23.881 "claimed": false, 00:25:23.881 "zoned": false, 00:25:23.881 "supported_io_types": { 00:25:23.881 "read": true, 00:25:23.881 "write": true, 00:25:23.881 "unmap": false, 00:25:23.881 "write_zeroes": true, 00:25:23.881 "flush": true, 00:25:23.881 "reset": true, 00:25:23.881 "compare": true, 00:25:23.881 "compare_and_write": true, 00:25:23.881 "abort": true, 00:25:23.881 "nvme_admin": true, 00:25:23.881 "nvme_io": true 00:25:23.881 }, 00:25:23.881 "memory_domains": [ 00:25:23.881 { 00:25:23.881 "dma_device_id": "system", 00:25:23.881 "dma_device_type": 1 00:25:23.881 } 00:25:23.881 ], 00:25:23.881 "driver_specific": { 00:25:23.881 "nvme": [ 00:25:23.881 { 00:25:23.881 "trid": { 00:25:23.881 "trtype": "TCP", 00:25:23.881 "adrfam": "IPv4", 00:25:23.881 "traddr": "10.0.0.2", 00:25:23.881 "trsvcid": "4420", 00:25:23.881 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:23.881 }, 00:25:23.881 "ctrlr_data": { 00:25:23.881 "cntlid": 2, 00:25:23.881 "vendor_id": "0x8086", 00:25:23.881 "model_number": "SPDK bdev Controller", 00:25:23.881 "serial_number": "00000000000000000000", 00:25:23.881 "firmware_revision": "24.05", 00:25:23.881 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:23.881 "oacs": { 00:25:23.881 "security": 0, 00:25:23.881 "format": 0, 00:25:23.881 "firmware": 0, 00:25:23.881 "ns_manage": 0 00:25:23.881 }, 00:25:23.881 "multi_ctrlr": true, 00:25:23.881 "ana_reporting": false 00:25:23.881 }, 00:25:23.881 "vs": { 00:25:23.881 "nvme_version": "1.3" 00:25:23.881 }, 00:25:23.881 "ns_data": { 00:25:23.881 "id": 1, 00:25:23.881 "can_share": true 00:25:23.881 } 00:25:23.881 } 00:25:23.881 ], 00:25:23.881 "mp_policy": "active_passive" 00:25:23.881 } 00:25:23.881 } 00:25:23.881 ] 00:25:23.881 06:51:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.881 06:51:28 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.881 06:51:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.881 06:51:28 -- common/autotest_common.sh@10 -- # set +x 00:25:23.881 06:51:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.881 06:51:28 -- host/async_init.sh@53 -- # mktemp 00:25:23.881 06:51:28 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.ZbjDrWlnG0 00:25:23.881 06:51:28 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:23.881 06:51:28 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.ZbjDrWlnG0 00:25:23.882 06:51:28 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:23.882 06:51:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.882 06:51:28 -- common/autotest_common.sh@10 -- # set +x 00:25:23.882 06:51:28 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.882 06:51:28 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:23.882 06:51:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.882 06:51:28 -- common/autotest_common.sh@10 -- # set +x 00:25:23.882 [2024-04-17 06:51:28.354567] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:23.882 [2024-04-17 06:51:28.354699] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:23.882 06:51:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.882 06:51:28 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZbjDrWlnG0 00:25:23.882 06:51:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.882 06:51:28 -- common/autotest_common.sh@10 -- # set +x 00:25:23.882 [2024-04-17 06:51:28.362595] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:23.882 06:51:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.882 06:51:28 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZbjDrWlnG0 00:25:23.882 06:51:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.882 06:51:28 -- common/autotest_common.sh@10 -- # set +x 00:25:23.882 [2024-04-17 06:51:28.370604] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:23.882 [2024-04-17 06:51:28.370663] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:23.882 nvme0n1 00:25:23.882 06:51:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.882 06:51:28 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:23.882 06:51:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.882 06:51:28 -- common/autotest_common.sh@10 -- # set +x 00:25:23.882 [ 00:25:23.882 { 00:25:23.882 "name": "nvme0n1", 00:25:23.882 "aliases": [ 00:25:23.882 "f5578f02-611e-4a17-bdae-30c5bd357dec" 00:25:23.882 ], 00:25:23.882 "product_name": "NVMe disk", 00:25:23.882 "block_size": 512, 00:25:23.882 "num_blocks": 2097152, 00:25:23.882 "uuid": "f5578f02-611e-4a17-bdae-30c5bd357dec", 00:25:23.882 "assigned_rate_limits": { 00:25:23.882 "rw_ios_per_sec": 0, 00:25:23.882 "rw_mbytes_per_sec": 0, 00:25:23.882 "r_mbytes_per_sec": 0, 00:25:23.882 "w_mbytes_per_sec": 0 00:25:23.882 }, 00:25:23.882 "claimed": false, 00:25:23.882 "zoned": false, 00:25:23.882 "supported_io_types": { 00:25:23.882 "read": true, 00:25:23.882 "write": true, 00:25:23.882 "unmap": false, 00:25:23.882 "write_zeroes": true, 00:25:23.882 "flush": true, 00:25:23.882 "reset": true, 00:25:23.882 "compare": true, 00:25:23.882 "compare_and_write": true, 00:25:23.882 "abort": true, 00:25:23.882 "nvme_admin": true, 00:25:23.882 "nvme_io": true 00:25:23.882 }, 00:25:23.882 "memory_domains": [ 00:25:23.882 { 00:25:23.882 "dma_device_id": "system", 00:25:23.882 "dma_device_type": 1 00:25:23.882 } 00:25:23.882 ], 00:25:23.882 "driver_specific": { 00:25:23.882 "nvme": [ 00:25:23.882 { 00:25:23.882 "trid": { 00:25:23.882 "trtype": "TCP", 00:25:23.882 "adrfam": "IPv4", 00:25:23.882 "traddr": "10.0.0.2", 
00:25:23.882 "trsvcid": "4421", 00:25:23.882 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:23.882 }, 00:25:23.882 "ctrlr_data": { 00:25:23.882 "cntlid": 3, 00:25:23.882 "vendor_id": "0x8086", 00:25:23.882 "model_number": "SPDK bdev Controller", 00:25:23.882 "serial_number": "00000000000000000000", 00:25:23.882 "firmware_revision": "24.05", 00:25:23.882 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:23.882 "oacs": { 00:25:23.882 "security": 0, 00:25:23.882 "format": 0, 00:25:23.882 "firmware": 0, 00:25:23.882 "ns_manage": 0 00:25:23.882 }, 00:25:23.882 "multi_ctrlr": true, 00:25:23.882 "ana_reporting": false 00:25:23.882 }, 00:25:23.882 "vs": { 00:25:23.882 "nvme_version": "1.3" 00:25:23.882 }, 00:25:23.882 "ns_data": { 00:25:23.882 "id": 1, 00:25:23.882 "can_share": true 00:25:23.882 } 00:25:23.882 } 00:25:23.882 ], 00:25:23.882 "mp_policy": "active_passive" 00:25:23.882 } 00:25:23.882 } 00:25:23.882 ] 00:25:23.882 06:51:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.882 06:51:28 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.882 06:51:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.882 06:51:28 -- common/autotest_common.sh@10 -- # set +x 00:25:23.882 06:51:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.882 06:51:28 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.ZbjDrWlnG0 00:25:23.882 06:51:28 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:23.882 06:51:28 -- host/async_init.sh@78 -- # nvmftestfini 00:25:23.882 06:51:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:23.882 06:51:28 -- nvmf/common.sh@117 -- # sync 00:25:23.882 06:51:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:23.882 06:51:28 -- nvmf/common.sh@120 -- # set +e 00:25:23.882 06:51:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:23.882 06:51:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:23.882 rmmod nvme_tcp 00:25:24.140 rmmod nvme_fabrics 00:25:24.140 rmmod nvme_keyring 00:25:24.140 06:51:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:24.140 06:51:28 -- nvmf/common.sh@124 -- # set -e 00:25:24.140 06:51:28 -- nvmf/common.sh@125 -- # return 0 00:25:24.140 06:51:28 -- nvmf/common.sh@478 -- # '[' -n 66407 ']' 00:25:24.140 06:51:28 -- nvmf/common.sh@479 -- # killprocess 66407 00:25:24.140 06:51:28 -- common/autotest_common.sh@936 -- # '[' -z 66407 ']' 00:25:24.140 06:51:28 -- common/autotest_common.sh@940 -- # kill -0 66407 00:25:24.140 06:51:28 -- common/autotest_common.sh@941 -- # uname 00:25:24.140 06:51:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:24.140 06:51:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66407 00:25:24.140 06:51:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:24.140 06:51:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:24.140 06:51:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66407' 00:25:24.141 killing process with pid 66407 00:25:24.141 06:51:28 -- common/autotest_common.sh@955 -- # kill 66407 00:25:24.141 [2024-04-17 06:51:28.553800] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:24.141 [2024-04-17 06:51:28.553838] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:24.141 06:51:28 -- common/autotest_common.sh@960 -- # wait 66407 00:25:24.399 06:51:28 -- nvmf/common.sh@481 
-- # '[' '' == iso ']' 00:25:24.399 06:51:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:24.399 06:51:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:24.399 06:51:28 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:24.399 06:51:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:24.399 06:51:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.399 06:51:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:24.399 06:51:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.299 06:51:30 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:26.299 00:25:26.299 real 0m5.499s 00:25:26.299 user 0m2.096s 00:25:26.299 sys 0m1.765s 00:25:26.299 06:51:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:26.299 06:51:30 -- common/autotest_common.sh@10 -- # set +x 00:25:26.299 ************************************ 00:25:26.299 END TEST nvmf_async_init 00:25:26.299 ************************************ 00:25:26.299 06:51:30 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:26.299 06:51:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:26.299 06:51:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:26.299 06:51:30 -- common/autotest_common.sh@10 -- # set +x 00:25:26.557 ************************************ 00:25:26.557 START TEST dma 00:25:26.557 ************************************ 00:25:26.557 06:51:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:26.557 * Looking for test storage... 00:25:26.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:26.557 06:51:30 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:26.557 06:51:30 -- nvmf/common.sh@7 -- # uname -s 00:25:26.557 06:51:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:26.557 06:51:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:26.557 06:51:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:26.557 06:51:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:26.557 06:51:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:26.557 06:51:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:26.557 06:51:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:26.557 06:51:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:26.557 06:51:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:26.557 06:51:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:26.557 06:51:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:26.557 06:51:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:26.557 06:51:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:26.557 06:51:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:26.557 06:51:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:26.557 06:51:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:26.557 06:51:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:26.557 06:51:31 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:26.557 06:51:31 -- scripts/common.sh@510 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:26.557 06:51:31 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:26.557 06:51:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.557 06:51:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.557 06:51:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.557 06:51:31 -- paths/export.sh@5 -- # export PATH 00:25:26.557 06:51:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.557 06:51:31 -- nvmf/common.sh@47 -- # : 0 00:25:26.557 06:51:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:26.557 06:51:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:26.557 06:51:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:26.557 06:51:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:26.557 06:51:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:26.557 06:51:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:26.557 06:51:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:26.557 06:51:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:26.557 06:51:31 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:26.557 06:51:31 -- host/dma.sh@13 -- # exit 0 00:25:26.557 00:25:26.557 real 0m0.069s 00:25:26.557 user 0m0.024s 00:25:26.557 sys 0m0.050s 00:25:26.557 06:51:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:26.557 06:51:31 -- common/autotest_common.sh@10 -- # set +x 00:25:26.557 ************************************ 00:25:26.557 END TEST dma 00:25:26.557 
************************************ 00:25:26.557 06:51:31 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:26.557 06:51:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:26.557 06:51:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:26.557 06:51:31 -- common/autotest_common.sh@10 -- # set +x 00:25:26.557 ************************************ 00:25:26.557 START TEST nvmf_identify 00:25:26.557 ************************************ 00:25:26.557 06:51:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:26.815 * Looking for test storage... 00:25:26.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:26.815 06:51:31 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:26.815 06:51:31 -- nvmf/common.sh@7 -- # uname -s 00:25:26.815 06:51:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:26.815 06:51:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:26.815 06:51:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:26.815 06:51:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:26.815 06:51:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:26.815 06:51:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:26.815 06:51:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:26.816 06:51:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:26.816 06:51:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:26.816 06:51:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:26.816 06:51:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:26.816 06:51:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:26.816 06:51:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:26.816 06:51:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:26.816 06:51:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:26.816 06:51:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:26.816 06:51:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:26.816 06:51:31 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:26.816 06:51:31 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:26.816 06:51:31 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:26.816 06:51:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.816 06:51:31 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.816 06:51:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.816 06:51:31 -- paths/export.sh@5 -- # export PATH 00:25:26.816 06:51:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.816 06:51:31 -- nvmf/common.sh@47 -- # : 0 00:25:26.816 06:51:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:26.816 06:51:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:26.816 06:51:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:26.816 06:51:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:26.816 06:51:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:26.816 06:51:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:26.816 06:51:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:26.816 06:51:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:26.816 06:51:31 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:26.816 06:51:31 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:26.816 06:51:31 -- host/identify.sh@14 -- # nvmftestinit 00:25:26.816 06:51:31 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:26.816 06:51:31 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:26.816 06:51:31 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:26.816 06:51:31 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:26.816 06:51:31 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:26.816 06:51:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.816 06:51:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:26.816 06:51:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.816 06:51:31 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:26.816 06:51:31 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:26.816 06:51:31 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:26.816 06:51:31 -- common/autotest_common.sh@10 -- # set +x 00:25:28.717 06:51:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
00:25:28.717 06:51:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:28.717 06:51:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:28.717 06:51:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:28.717 06:51:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:28.717 06:51:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:28.717 06:51:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:28.717 06:51:33 -- nvmf/common.sh@295 -- # net_devs=() 00:25:28.717 06:51:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:28.717 06:51:33 -- nvmf/common.sh@296 -- # e810=() 00:25:28.717 06:51:33 -- nvmf/common.sh@296 -- # local -ga e810 00:25:28.717 06:51:33 -- nvmf/common.sh@297 -- # x722=() 00:25:28.717 06:51:33 -- nvmf/common.sh@297 -- # local -ga x722 00:25:28.717 06:51:33 -- nvmf/common.sh@298 -- # mlx=() 00:25:28.717 06:51:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:28.717 06:51:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:28.717 06:51:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:28.717 06:51:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:28.717 06:51:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:28.717 06:51:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:28.717 06:51:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:28.717 06:51:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:28.717 06:51:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:28.717 06:51:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:28.717 06:51:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:28.717 06:51:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:28.717 06:51:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:28.717 06:51:33 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:28.717 06:51:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:28.717 06:51:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:28.717 06:51:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:28.717 06:51:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:28.717 06:51:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:28.717 06:51:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:28.717 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:28.718 06:51:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:28.718 06:51:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:28.718 06:51:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.718 06:51:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.718 06:51:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:28.718 06:51:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:28.718 06:51:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:28.718 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:28.718 06:51:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:28.718 06:51:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:28.718 06:51:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.718 06:51:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.718 06:51:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:28.718 06:51:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
00:25:28.718 06:51:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:28.718 06:51:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:28.718 06:51:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:28.718 06:51:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.718 06:51:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:28.718 06:51:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.718 06:51:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:28.718 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:28.718 06:51:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.718 06:51:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:28.718 06:51:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.718 06:51:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:28.718 06:51:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.718 06:51:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:28.718 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:28.718 06:51:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.718 06:51:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:28.718 06:51:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:28.718 06:51:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:28.718 06:51:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:28.718 06:51:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:28.718 06:51:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:28.718 06:51:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:28.718 06:51:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:28.718 06:51:33 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:28.718 06:51:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:28.718 06:51:33 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:28.718 06:51:33 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:28.718 06:51:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:28.718 06:51:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:28.718 06:51:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:28.718 06:51:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:28.718 06:51:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:28.718 06:51:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:28.718 06:51:33 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:28.718 06:51:33 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:28.718 06:51:33 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:28.718 06:51:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:28.718 06:51:33 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:28.718 06:51:33 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:28.718 06:51:33 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:28.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:28.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:25:28.718 00:25:28.718 --- 10.0.0.2 ping statistics --- 00:25:28.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.718 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:25:28.718 06:51:33 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:28.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:28.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:25:28.718 00:25:28.718 --- 10.0.0.1 ping statistics --- 00:25:28.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.718 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:25:28.718 06:51:33 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:28.718 06:51:33 -- nvmf/common.sh@411 -- # return 0 00:25:28.718 06:51:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:28.718 06:51:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:28.718 06:51:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:28.718 06:51:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:28.718 06:51:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:28.718 06:51:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:28.718 06:51:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:28.718 06:51:33 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:28.718 06:51:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:28.718 06:51:33 -- common/autotest_common.sh@10 -- # set +x 00:25:28.976 06:51:33 -- host/identify.sh@19 -- # nvmfpid=68543 00:25:28.976 06:51:33 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:28.976 06:51:33 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:28.976 06:51:33 -- host/identify.sh@23 -- # waitforlisten 68543 00:25:28.976 06:51:33 -- common/autotest_common.sh@817 -- # '[' -z 68543 ']' 00:25:28.976 06:51:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:28.976 06:51:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:28.976 06:51:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:28.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:28.976 06:51:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:28.976 06:51:33 -- common/autotest_common.sh@10 -- # set +x 00:25:28.976 [2024-04-17 06:51:33.368153] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:25:28.976 [2024-04-17 06:51:33.368274] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:28.976 EAL: No free 2048 kB hugepages reported on node 1 00:25:28.976 [2024-04-17 06:51:33.431992] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:28.976 [2024-04-17 06:51:33.518936] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:28.976 [2024-04-17 06:51:33.518986] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:28.976 [2024-04-17 06:51:33.519001] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:28.976 [2024-04-17 06:51:33.519013] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:28.976 [2024-04-17 06:51:33.519023] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:28.976 [2024-04-17 06:51:33.519111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.976 [2024-04-17 06:51:33.519198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:28.976 [2024-04-17 06:51:33.519406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:28.976 [2024-04-17 06:51:33.519408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.235 06:51:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:29.235 06:51:33 -- common/autotest_common.sh@850 -- # return 0 00:25:29.235 06:51:33 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:29.235 06:51:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.235 06:51:33 -- common/autotest_common.sh@10 -- # set +x 00:25:29.235 [2024-04-17 06:51:33.641650] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:29.235 06:51:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.235 06:51:33 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:29.235 06:51:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:29.235 06:51:33 -- common/autotest_common.sh@10 -- # set +x 00:25:29.235 06:51:33 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:29.235 06:51:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.235 06:51:33 -- common/autotest_common.sh@10 -- # set +x 00:25:29.235 Malloc0 00:25:29.235 06:51:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.235 06:51:33 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:29.235 06:51:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.235 06:51:33 -- common/autotest_common.sh@10 -- # set +x 00:25:29.235 06:51:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.235 06:51:33 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:29.235 06:51:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.235 06:51:33 -- common/autotest_common.sh@10 -- # set +x 00:25:29.235 06:51:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.235 06:51:33 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:29.235 06:51:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.235 06:51:33 -- common/autotest_common.sh@10 -- # set +x 00:25:29.235 [2024-04-17 06:51:33.712539] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:29.235 06:51:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.235 06:51:33 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:29.235 06:51:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.235 06:51:33 -- common/autotest_common.sh@10 -- # set +x 00:25:29.235 06:51:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.235 06:51:33 -- host/identify.sh@37 -- # 
rpc_cmd nvmf_get_subsystems 00:25:29.235 06:51:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.236 06:51:33 -- common/autotest_common.sh@10 -- # set +x 00:25:29.236 [2024-04-17 06:51:33.728267] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:29.236 [ 00:25:29.236 { 00:25:29.236 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:29.236 "subtype": "Discovery", 00:25:29.236 "listen_addresses": [ 00:25:29.236 { 00:25:29.236 "transport": "TCP", 00:25:29.236 "trtype": "TCP", 00:25:29.236 "adrfam": "IPv4", 00:25:29.236 "traddr": "10.0.0.2", 00:25:29.236 "trsvcid": "4420" 00:25:29.236 } 00:25:29.236 ], 00:25:29.236 "allow_any_host": true, 00:25:29.236 "hosts": [] 00:25:29.236 }, 00:25:29.236 { 00:25:29.236 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:29.236 "subtype": "NVMe", 00:25:29.236 "listen_addresses": [ 00:25:29.236 { 00:25:29.236 "transport": "TCP", 00:25:29.236 "trtype": "TCP", 00:25:29.236 "adrfam": "IPv4", 00:25:29.236 "traddr": "10.0.0.2", 00:25:29.236 "trsvcid": "4420" 00:25:29.236 } 00:25:29.236 ], 00:25:29.236 "allow_any_host": true, 00:25:29.236 "hosts": [], 00:25:29.236 "serial_number": "SPDK00000000000001", 00:25:29.236 "model_number": "SPDK bdev Controller", 00:25:29.236 "max_namespaces": 32, 00:25:29.236 "min_cntlid": 1, 00:25:29.236 "max_cntlid": 65519, 00:25:29.236 "namespaces": [ 00:25:29.236 { 00:25:29.236 "nsid": 1, 00:25:29.236 "bdev_name": "Malloc0", 00:25:29.236 "name": "Malloc0", 00:25:29.236 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:29.236 "eui64": "ABCDEF0123456789", 00:25:29.236 "uuid": "3f66b2af-7eb4-4bf6-9429-6f8c7aee268f" 00:25:29.236 } 00:25:29.236 ] 00:25:29.236 } 00:25:29.236 ] 00:25:29.236 06:51:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.236 06:51:33 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:29.236 [2024-04-17 06:51:33.750004] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
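In the host/identify.sh trace above, the target is configured entirely over JSON-RPC: rpc_cmd creates the TCP transport, a 64 MB malloc bdev with 512-byte blocks, the nqn.2016-06.io.spdk:cnode1 subsystem with that bdev as namespace 1, and TCP listeners on 10.0.0.2:4420 for both the subsystem and the discovery service, before dumping the result with nvmf_get_subsystems. As a hedged recap only (rpc_cmd is a thin wrapper around scripts/rpc.py, and this run's target listens on the default /var/tmp/spdk.sock), the same sequence issued by hand from the spdk checkout would look like:

  # Recap of the RPC calls traced above, sent straight to the target's RPC socket.
  rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc bdev_malloc_create 64 512 -b Malloc0
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc nvmf_get_subsystems

The JSON printed above (the discovery subsystem plus cnode1 with the Malloc0 namespace and the NGUID/EUI64 as passed in) is what nvmf_get_subsystems returns for exactly this configuration.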
00:25:29.236 [2024-04-17 06:51:33.750041] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68576 ] 00:25:29.236 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.236 [2024-04-17 06:51:33.784417] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:25:29.236 [2024-04-17 06:51:33.784482] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:29.236 [2024-04-17 06:51:33.784492] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:29.236 [2024-04-17 06:51:33.784506] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:29.236 [2024-04-17 06:51:33.784533] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:29.236 [2024-04-17 06:51:33.784878] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:25:29.236 [2024-04-17 06:51:33.784931] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2044af0 0 00:25:29.236 [2024-04-17 06:51:33.799197] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:29.236 [2024-04-17 06:51:33.799217] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:29.236 [2024-04-17 06:51:33.799225] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:29.236 [2024-04-17 06:51:33.799230] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:29.236 [2024-04-17 06:51:33.799293] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.236 [2024-04-17 06:51:33.799306] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.236 [2024-04-17 06:51:33.799312] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2044af0) 00:25:29.236 [2024-04-17 06:51:33.799334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:29.236 [2024-04-17 06:51:33.799362] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af470, cid 0, qid 0 00:25:29.236 [2024-04-17 06:51:33.807203] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.236 [2024-04-17 06:51:33.807220] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.236 [2024-04-17 06:51:33.807227] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.236 [2024-04-17 06:51:33.807235] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20af470) on tqpair=0x2044af0 00:25:29.236 [2024-04-17 06:51:33.807251] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:29.236 [2024-04-17 06:51:33.807276] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:29.236 [2024-04-17 06:51:33.807285] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:25:29.236 [2024-04-17 06:51:33.807305] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.236 [2024-04-17 06:51:33.807314] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:25:29.236 [2024-04-17 06:51:33.807321] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2044af0) 00:25:29.236 [2024-04-17 06:51:33.807332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.236 [2024-04-17 06:51:33.807357] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af470, cid 0, qid 0 00:25:29.236 [2024-04-17 06:51:33.807535] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.236 [2024-04-17 06:51:33.807547] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.236 [2024-04-17 06:51:33.807554] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.236 [2024-04-17 06:51:33.807561] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20af470) on tqpair=0x2044af0 00:25:29.236 [2024-04-17 06:51:33.807571] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:29.236 [2024-04-17 06:51:33.807584] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:29.236 [2024-04-17 06:51:33.807596] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.236 [2024-04-17 06:51:33.807603] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.236 [2024-04-17 06:51:33.807609] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2044af0) 00:25:29.236 [2024-04-17 06:51:33.807620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.236 [2024-04-17 06:51:33.807641] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af470, cid 0, qid 0 00:25:29.236 [2024-04-17 06:51:33.807777] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.236 [2024-04-17 06:51:33.807792] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.236 [2024-04-17 06:51:33.807799] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.236 [2024-04-17 06:51:33.807805] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20af470) on tqpair=0x2044af0 00:25:29.236 [2024-04-17 06:51:33.807815] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:29.236 [2024-04-17 06:51:33.807828] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:29.236 [2024-04-17 06:51:33.807840] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.236 [2024-04-17 06:51:33.807847] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.236 [2024-04-17 06:51:33.807853] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2044af0) 00:25:29.236 [2024-04-17 06:51:33.807868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.236 [2024-04-17 06:51:33.807890] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af470, cid 0, qid 0 00:25:29.236 [2024-04-17 06:51:33.808018] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.236 [2024-04-17 
06:51:33.808030] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.236 [2024-04-17 06:51:33.808037] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.236 [2024-04-17 06:51:33.808043] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20af470) on tqpair=0x2044af0 00:25:29.236 [2024-04-17 06:51:33.808053] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:29.236 [2024-04-17 06:51:33.808069] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.236 [2024-04-17 06:51:33.808078] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.236 [2024-04-17 06:51:33.808084] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2044af0) 00:25:29.236 [2024-04-17 06:51:33.808094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.236 [2024-04-17 06:51:33.808114] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af470, cid 0, qid 0 00:25:29.236 [2024-04-17 06:51:33.808244] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.236 [2024-04-17 06:51:33.808257] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.236 [2024-04-17 06:51:33.808264] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.236 [2024-04-17 06:51:33.808271] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20af470) on tqpair=0x2044af0 00:25:29.236 [2024-04-17 06:51:33.808280] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:29.236 [2024-04-17 06:51:33.808289] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:29.236 [2024-04-17 06:51:33.808302] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:29.236 [2024-04-17 06:51:33.808411] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:29.236 [2024-04-17 06:51:33.808419] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:29.236 [2024-04-17 06:51:33.808431] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.237 [2024-04-17 06:51:33.808439] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.237 [2024-04-17 06:51:33.808445] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2044af0) 00:25:29.237 [2024-04-17 06:51:33.808455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.237 [2024-04-17 06:51:33.808476] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af470, cid 0, qid 0 00:25:29.237 [2024-04-17 06:51:33.808634] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.237 [2024-04-17 06:51:33.808650] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.237 [2024-04-17 06:51:33.808656] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:25:29.237 [2024-04-17 06:51:33.808663] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20af470) on tqpair=0x2044af0 00:25:29.237 [2024-04-17 06:51:33.808673] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:29.237 [2024-04-17 06:51:33.808689] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.237 [2024-04-17 06:51:33.808702] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.237 [2024-04-17 06:51:33.808709] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2044af0) 00:25:29.237 [2024-04-17 06:51:33.808719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.237 [2024-04-17 06:51:33.808740] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af470, cid 0, qid 0 00:25:29.237 [2024-04-17 06:51:33.808866] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.237 [2024-04-17 06:51:33.808881] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.237 [2024-04-17 06:51:33.808887] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.237 [2024-04-17 06:51:33.808894] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20af470) on tqpair=0x2044af0 00:25:29.237 [2024-04-17 06:51:33.808903] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:29.237 [2024-04-17 06:51:33.808912] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:29.237 [2024-04-17 06:51:33.808924] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:29.237 [2024-04-17 06:51:33.808939] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:29.237 [2024-04-17 06:51:33.808956] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.237 [2024-04-17 06:51:33.808964] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2044af0) 00:25:29.237 [2024-04-17 06:51:33.808975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.237 [2024-04-17 06:51:33.808995] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af470, cid 0, qid 0 00:25:29.237 [2024-04-17 06:51:33.809150] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:29.237 [2024-04-17 06:51:33.809162] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:29.237 [2024-04-17 06:51:33.809169] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:29.237 [2024-04-17 06:51:33.809183] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2044af0): datao=0, datal=4096, cccid=0 00:25:29.237 [2024-04-17 06:51:33.809191] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20af470) on tqpair(0x2044af0): expected_datao=0, payload_size=4096 00:25:29.237 [2024-04-17 06:51:33.809199] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.237 [2024-04-17 06:51:33.809220] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:29.237 [2024-04-17 06:51:33.809232] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:29.237 [2024-04-17 06:51:33.809299] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.237 [2024-04-17 06:51:33.809311] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.237 [2024-04-17 06:51:33.809318] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.237 [2024-04-17 06:51:33.809324] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20af470) on tqpair=0x2044af0 00:25:29.237 [2024-04-17 06:51:33.809337] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:29.237 [2024-04-17 06:51:33.809346] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:29.237 [2024-04-17 06:51:33.809353] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:29.237 [2024-04-17 06:51:33.809361] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:29.237 [2024-04-17 06:51:33.809369] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:29.237 [2024-04-17 06:51:33.809381] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:29.237 [2024-04-17 06:51:33.809396] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:25:29.237 [2024-04-17 06:51:33.809408] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.237 [2024-04-17 06:51:33.809415] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.237 [2024-04-17 06:51:33.809422] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2044af0) 00:25:29.237 [2024-04-17 06:51:33.809433] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:29.237 [2024-04-17 06:51:33.809454] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af470, cid 0, qid 0 00:25:29.237 [2024-04-17 06:51:33.809587] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.237 [2024-04-17 06:51:33.809602] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.237 [2024-04-17 06:51:33.809609] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.237 [2024-04-17 06:51:33.809616] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20af470) on tqpair=0x2044af0 00:25:29.237 [2024-04-17 06:51:33.809628] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.237 [2024-04-17 06:51:33.809635] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.237 [2024-04-17 06:51:33.809642] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2044af0) 00:25:29.237 [2024-04-17 06:51:33.809651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:25:29.237 [2024-04-17 06:51:33.809662] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.237 [2024-04-17 06:51:33.809669] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.237 [2024-04-17 06:51:33.809675] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2044af0) 00:25:29.237 [2024-04-17 06:51:33.809683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:29.237 [2024-04-17 06:51:33.809693] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.237 [2024-04-17 06:51:33.809699] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.237 [2024-04-17 06:51:33.809706] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2044af0) 00:25:29.237 [2024-04-17 06:51:33.809714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:29.237 [2024-04-17 06:51:33.809724] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.237 [2024-04-17 06:51:33.809730] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.237 [2024-04-17 06:51:33.809737] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2044af0) 00:25:29.237 [2024-04-17 06:51:33.809745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:29.237 [2024-04-17 06:51:33.809754] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:25:29.237 [2024-04-17 06:51:33.809773] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:29.237 [2024-04-17 06:51:33.809785] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.237 [2024-04-17 06:51:33.809792] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2044af0) 00:25:29.237 [2024-04-17 06:51:33.809803] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.237 [2024-04-17 06:51:33.809843] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af470, cid 0, qid 0 00:25:29.237 [2024-04-17 06:51:33.809855] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af5d0, cid 1, qid 0 00:25:29.237 [2024-04-17 06:51:33.809862] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af730, cid 2, qid 0 00:25:29.237 [2024-04-17 06:51:33.809869] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af890, cid 3, qid 0 00:25:29.237 [2024-04-17 06:51:33.809877] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af9f0, cid 4, qid 0 00:25:29.237 [2024-04-17 06:51:33.810110] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.237 [2024-04-17 06:51:33.810126] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.237 [2024-04-17 06:51:33.810133] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.237 [2024-04-17 06:51:33.810139] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20af9f0) on tqpair=0x2044af0 
00:25:29.237 [2024-04-17 06:51:33.810149] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:29.237 [2024-04-17 06:51:33.810158] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:29.237 [2024-04-17 06:51:33.810181] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.237 [2024-04-17 06:51:33.810191] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2044af0) 00:25:29.237 [2024-04-17 06:51:33.810202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.237 [2024-04-17 06:51:33.810223] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af9f0, cid 4, qid 0 00:25:29.237 [2024-04-17 06:51:33.810395] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:29.237 [2024-04-17 06:51:33.810410] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:29.237 [2024-04-17 06:51:33.810417] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:29.237 [2024-04-17 06:51:33.810423] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2044af0): datao=0, datal=4096, cccid=4 00:25:29.238 [2024-04-17 06:51:33.810430] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20af9f0) on tqpair(0x2044af0): expected_datao=0, payload_size=4096 00:25:29.238 [2024-04-17 06:51:33.810438] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.238 [2024-04-17 06:51:33.810454] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:29.238 [2024-04-17 06:51:33.810463] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:29.501 [2024-04-17 06:51:33.855208] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.501 [2024-04-17 06:51:33.855227] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.501 [2024-04-17 06:51:33.855234] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.501 [2024-04-17 06:51:33.855241] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20af9f0) on tqpair=0x2044af0 00:25:29.501 [2024-04-17 06:51:33.855261] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:29.501 [2024-04-17 06:51:33.855305] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.501 [2024-04-17 06:51:33.855315] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2044af0) 00:25:29.501 [2024-04-17 06:51:33.855327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.501 [2024-04-17 06:51:33.855338] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.501 [2024-04-17 06:51:33.855345] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.501 [2024-04-17 06:51:33.855352] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2044af0) 00:25:29.501 [2024-04-17 06:51:33.855361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:29.501 [2024-04-17 06:51:33.855394] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af9f0, cid 4, qid 0 00:25:29.501 [2024-04-17 06:51:33.855407] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20afb50, cid 5, qid 0 00:25:29.501 [2024-04-17 06:51:33.855613] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:29.501 [2024-04-17 06:51:33.855626] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:29.501 [2024-04-17 06:51:33.855632] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:29.501 [2024-04-17 06:51:33.855639] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2044af0): datao=0, datal=1024, cccid=4 00:25:29.501 [2024-04-17 06:51:33.855647] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20af9f0) on tqpair(0x2044af0): expected_datao=0, payload_size=1024 00:25:29.501 [2024-04-17 06:51:33.855654] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.501 [2024-04-17 06:51:33.855664] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:29.501 [2024-04-17 06:51:33.855671] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:29.501 [2024-04-17 06:51:33.855679] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.501 [2024-04-17 06:51:33.855688] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.501 [2024-04-17 06:51:33.855694] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.501 [2024-04-17 06:51:33.855701] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20afb50) on tqpair=0x2044af0 00:25:29.501 [2024-04-17 06:51:33.897326] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.501 [2024-04-17 06:51:33.897347] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.501 [2024-04-17 06:51:33.897354] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.501 [2024-04-17 06:51:33.897361] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20af9f0) on tqpair=0x2044af0 00:25:29.501 [2024-04-17 06:51:33.897380] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.501 [2024-04-17 06:51:33.897389] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2044af0) 00:25:29.502 [2024-04-17 06:51:33.897400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.502 [2024-04-17 06:51:33.897430] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af9f0, cid 4, qid 0 00:25:29.502 [2024-04-17 06:51:33.897572] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:29.502 [2024-04-17 06:51:33.897584] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:29.502 [2024-04-17 06:51:33.897591] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:29.502 [2024-04-17 06:51:33.897597] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2044af0): datao=0, datal=3072, cccid=4 00:25:29.502 [2024-04-17 06:51:33.897604] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20af9f0) on tqpair(0x2044af0): expected_datao=0, payload_size=3072 00:25:29.502 [2024-04-17 06:51:33.897612] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.502 [2024-04-17 06:51:33.897631] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
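Once the discovery controller reaches the ready state, the trace shows its discovery log page (log ID 0x70) being fetched with a series of GET LOG PAGE admin commands, and the decoded report, two records covering the discovery subsystem and nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, is printed below. As a hedged cross-check that is not part of this run: with nvme-cli installed and the kernel initiator available (modprobe nvme-tcp was already done above), the same discovery log can be read from the initiator side of the namespace split (cvl_0_1, 10.0.0.1):

  # Query the SPDK discovery service at 10.0.0.2:4420 with the kernel initiator.
  nvme discover -t tcp -a 10.0.0.2 -s 4420

which should list the same two entries that spdk_nvme_identify decodes in the output below.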
00:25:29.502 [2024-04-17 06:51:33.897642] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:29.502 [2024-04-17 06:51:33.897710] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.502 [2024-04-17 06:51:33.897722] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.502 [2024-04-17 06:51:33.897729] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.502 [2024-04-17 06:51:33.897735] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20af9f0) on tqpair=0x2044af0 00:25:29.502 [2024-04-17 06:51:33.897751] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.502 [2024-04-17 06:51:33.897760] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2044af0) 00:25:29.502 [2024-04-17 06:51:33.897770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.502 [2024-04-17 06:51:33.897806] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af9f0, cid 4, qid 0 00:25:29.502 [2024-04-17 06:51:33.897951] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:29.502 [2024-04-17 06:51:33.897966] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:29.502 [2024-04-17 06:51:33.897973] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:29.502 [2024-04-17 06:51:33.897979] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2044af0): datao=0, datal=8, cccid=4 00:25:29.502 [2024-04-17 06:51:33.897987] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20af9f0) on tqpair(0x2044af0): expected_datao=0, payload_size=8 00:25:29.502 [2024-04-17 06:51:33.897994] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.502 [2024-04-17 06:51:33.898004] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:29.502 [2024-04-17 06:51:33.898011] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:29.502 [2024-04-17 06:51:33.941191] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.502 [2024-04-17 06:51:33.941210] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.502 [2024-04-17 06:51:33.941218] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.502 [2024-04-17 06:51:33.941225] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20af9f0) on tqpair=0x2044af0 00:25:29.502 ===================================================== 00:25:29.502 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:29.502 ===================================================== 00:25:29.502 Controller Capabilities/Features 00:25:29.502 ================================ 00:25:29.502 Vendor ID: 0000 00:25:29.502 Subsystem Vendor ID: 0000 00:25:29.502 Serial Number: .................... 00:25:29.502 Model Number: ........................................ 
00:25:29.502 Firmware Version: 24.05 00:25:29.502 Recommended Arb Burst: 0 00:25:29.502 IEEE OUI Identifier: 00 00 00 00:25:29.502 Multi-path I/O 00:25:29.502 May have multiple subsystem ports: No 00:25:29.502 May have multiple controllers: No 00:25:29.502 Associated with SR-IOV VF: No 00:25:29.502 Max Data Transfer Size: 131072 00:25:29.502 Max Number of Namespaces: 0 00:25:29.502 Max Number of I/O Queues: 1024 00:25:29.502 NVMe Specification Version (VS): 1.3 00:25:29.502 NVMe Specification Version (Identify): 1.3 00:25:29.502 Maximum Queue Entries: 128 00:25:29.502 Contiguous Queues Required: Yes 00:25:29.502 Arbitration Mechanisms Supported 00:25:29.502 Weighted Round Robin: Not Supported 00:25:29.502 Vendor Specific: Not Supported 00:25:29.502 Reset Timeout: 15000 ms 00:25:29.502 Doorbell Stride: 4 bytes 00:25:29.502 NVM Subsystem Reset: Not Supported 00:25:29.502 Command Sets Supported 00:25:29.502 NVM Command Set: Supported 00:25:29.502 Boot Partition: Not Supported 00:25:29.502 Memory Page Size Minimum: 4096 bytes 00:25:29.502 Memory Page Size Maximum: 4096 bytes 00:25:29.502 Persistent Memory Region: Not Supported 00:25:29.502 Optional Asynchronous Events Supported 00:25:29.502 Namespace Attribute Notices: Not Supported 00:25:29.502 Firmware Activation Notices: Not Supported 00:25:29.502 ANA Change Notices: Not Supported 00:25:29.502 PLE Aggregate Log Change Notices: Not Supported 00:25:29.502 LBA Status Info Alert Notices: Not Supported 00:25:29.502 EGE Aggregate Log Change Notices: Not Supported 00:25:29.502 Normal NVM Subsystem Shutdown event: Not Supported 00:25:29.502 Zone Descriptor Change Notices: Not Supported 00:25:29.502 Discovery Log Change Notices: Supported 00:25:29.502 Controller Attributes 00:25:29.502 128-bit Host Identifier: Not Supported 00:25:29.502 Non-Operational Permissive Mode: Not Supported 00:25:29.502 NVM Sets: Not Supported 00:25:29.502 Read Recovery Levels: Not Supported 00:25:29.502 Endurance Groups: Not Supported 00:25:29.502 Predictable Latency Mode: Not Supported 00:25:29.502 Traffic Based Keep ALive: Not Supported 00:25:29.502 Namespace Granularity: Not Supported 00:25:29.502 SQ Associations: Not Supported 00:25:29.502 UUID List: Not Supported 00:25:29.502 Multi-Domain Subsystem: Not Supported 00:25:29.502 Fixed Capacity Management: Not Supported 00:25:29.502 Variable Capacity Management: Not Supported 00:25:29.502 Delete Endurance Group: Not Supported 00:25:29.502 Delete NVM Set: Not Supported 00:25:29.502 Extended LBA Formats Supported: Not Supported 00:25:29.502 Flexible Data Placement Supported: Not Supported 00:25:29.502 00:25:29.502 Controller Memory Buffer Support 00:25:29.502 ================================ 00:25:29.502 Supported: No 00:25:29.502 00:25:29.502 Persistent Memory Region Support 00:25:29.502 ================================ 00:25:29.502 Supported: No 00:25:29.502 00:25:29.502 Admin Command Set Attributes 00:25:29.502 ============================ 00:25:29.502 Security Send/Receive: Not Supported 00:25:29.502 Format NVM: Not Supported 00:25:29.502 Firmware Activate/Download: Not Supported 00:25:29.502 Namespace Management: Not Supported 00:25:29.502 Device Self-Test: Not Supported 00:25:29.502 Directives: Not Supported 00:25:29.502 NVMe-MI: Not Supported 00:25:29.502 Virtualization Management: Not Supported 00:25:29.502 Doorbell Buffer Config: Not Supported 00:25:29.502 Get LBA Status Capability: Not Supported 00:25:29.502 Command & Feature Lockdown Capability: Not Supported 00:25:29.503 Abort Command Limit: 1 00:25:29.503 Async 
Event Request Limit: 4 00:25:29.503 Number of Firmware Slots: N/A 00:25:29.503 Firmware Slot 1 Read-Only: N/A 00:25:29.503 Firmware Activation Without Reset: N/A 00:25:29.503 Multiple Update Detection Support: N/A 00:25:29.503 Firmware Update Granularity: No Information Provided 00:25:29.503 Per-Namespace SMART Log: No 00:25:29.503 Asymmetric Namespace Access Log Page: Not Supported 00:25:29.503 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:29.503 Command Effects Log Page: Not Supported 00:25:29.503 Get Log Page Extended Data: Supported 00:25:29.503 Telemetry Log Pages: Not Supported 00:25:29.503 Persistent Event Log Pages: Not Supported 00:25:29.503 Supported Log Pages Log Page: May Support 00:25:29.503 Commands Supported & Effects Log Page: Not Supported 00:25:29.503 Feature Identifiers & Effects Log Page:May Support 00:25:29.503 NVMe-MI Commands & Effects Log Page: May Support 00:25:29.503 Data Area 4 for Telemetry Log: Not Supported 00:25:29.503 Error Log Page Entries Supported: 128 00:25:29.503 Keep Alive: Not Supported 00:25:29.503 00:25:29.503 NVM Command Set Attributes 00:25:29.503 ========================== 00:25:29.503 Submission Queue Entry Size 00:25:29.503 Max: 1 00:25:29.503 Min: 1 00:25:29.503 Completion Queue Entry Size 00:25:29.503 Max: 1 00:25:29.503 Min: 1 00:25:29.503 Number of Namespaces: 0 00:25:29.503 Compare Command: Not Supported 00:25:29.503 Write Uncorrectable Command: Not Supported 00:25:29.503 Dataset Management Command: Not Supported 00:25:29.503 Write Zeroes Command: Not Supported 00:25:29.503 Set Features Save Field: Not Supported 00:25:29.503 Reservations: Not Supported 00:25:29.503 Timestamp: Not Supported 00:25:29.503 Copy: Not Supported 00:25:29.503 Volatile Write Cache: Not Present 00:25:29.503 Atomic Write Unit (Normal): 1 00:25:29.503 Atomic Write Unit (PFail): 1 00:25:29.503 Atomic Compare & Write Unit: 1 00:25:29.503 Fused Compare & Write: Supported 00:25:29.503 Scatter-Gather List 00:25:29.503 SGL Command Set: Supported 00:25:29.503 SGL Keyed: Supported 00:25:29.503 SGL Bit Bucket Descriptor: Not Supported 00:25:29.503 SGL Metadata Pointer: Not Supported 00:25:29.503 Oversized SGL: Not Supported 00:25:29.503 SGL Metadata Address: Not Supported 00:25:29.503 SGL Offset: Supported 00:25:29.503 Transport SGL Data Block: Not Supported 00:25:29.503 Replay Protected Memory Block: Not Supported 00:25:29.503 00:25:29.503 Firmware Slot Information 00:25:29.503 ========================= 00:25:29.503 Active slot: 0 00:25:29.503 00:25:29.503 00:25:29.503 Error Log 00:25:29.503 ========= 00:25:29.503 00:25:29.503 Active Namespaces 00:25:29.503 ================= 00:25:29.503 Discovery Log Page 00:25:29.503 ================== 00:25:29.503 Generation Counter: 2 00:25:29.503 Number of Records: 2 00:25:29.503 Record Format: 0 00:25:29.503 00:25:29.503 Discovery Log Entry 0 00:25:29.503 ---------------------- 00:25:29.503 Transport Type: 3 (TCP) 00:25:29.503 Address Family: 1 (IPv4) 00:25:29.503 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:29.503 Entry Flags: 00:25:29.503 Duplicate Returned Information: 1 00:25:29.503 Explicit Persistent Connection Support for Discovery: 1 00:25:29.503 Transport Requirements: 00:25:29.503 Secure Channel: Not Required 00:25:29.503 Port ID: 0 (0x0000) 00:25:29.503 Controller ID: 65535 (0xffff) 00:25:29.503 Admin Max SQ Size: 128 00:25:29.503 Transport Service Identifier: 4420 00:25:29.503 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:29.503 Transport Address: 10.0.0.2 00:25:29.503 
Discovery Log Entry 1 00:25:29.503 ---------------------- 00:25:29.503 Transport Type: 3 (TCP) 00:25:29.503 Address Family: 1 (IPv4) 00:25:29.503 Subsystem Type: 2 (NVM Subsystem) 00:25:29.503 Entry Flags: 00:25:29.503 Duplicate Returned Information: 0 00:25:29.503 Explicit Persistent Connection Support for Discovery: 0 00:25:29.503 Transport Requirements: 00:25:29.503 Secure Channel: Not Required 00:25:29.503 Port ID: 0 (0x0000) 00:25:29.503 Controller ID: 65535 (0xffff) 00:25:29.503 Admin Max SQ Size: 128 00:25:29.503 Transport Service Identifier: 4420 00:25:29.503 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:29.503 Transport Address: 10.0.0.2 [2024-04-17 06:51:33.941338] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:25:29.503 [2024-04-17 06:51:33.941363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.503 [2024-04-17 06:51:33.941376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.503 [2024-04-17 06:51:33.941385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.503 [2024-04-17 06:51:33.941394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.503 [2024-04-17 06:51:33.941407] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.503 [2024-04-17 06:51:33.941415] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.503 [2024-04-17 06:51:33.941422] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2044af0) 00:25:29.503 [2024-04-17 06:51:33.941433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.503 [2024-04-17 06:51:33.941458] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af890, cid 3, qid 0 00:25:29.503 [2024-04-17 06:51:33.941580] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.503 [2024-04-17 06:51:33.941592] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.503 [2024-04-17 06:51:33.941599] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.503 [2024-04-17 06:51:33.941606] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20af890) on tqpair=0x2044af0 00:25:29.503 [2024-04-17 06:51:33.941618] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.503 [2024-04-17 06:51:33.941626] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.503 [2024-04-17 06:51:33.941632] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2044af0) 00:25:29.503 [2024-04-17 06:51:33.941643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.503 [2024-04-17 06:51:33.941668] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af890, cid 3, qid 0 00:25:29.503 [2024-04-17 06:51:33.941807] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.503 [2024-04-17 06:51:33.941822] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.503 [2024-04-17 06:51:33.941833] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.503 [2024-04-17 06:51:33.941840] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20af890) on tqpair=0x2044af0 00:25:29.503 [2024-04-17 06:51:33.941850] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:25:29.503 [2024-04-17 06:51:33.941858] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:25:29.503 [2024-04-17 06:51:33.941874] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.503 [2024-04-17 06:51:33.941883] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.503 [2024-04-17 06:51:33.941889] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2044af0) 00:25:29.503 [2024-04-17 06:51:33.941900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.503 [2024-04-17 06:51:33.941920] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af890, cid 3, qid 0 00:25:29.503 [2024-04-17 06:51:33.942058] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.503 [2024-04-17 06:51:33.942073] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.503 [2024-04-17 06:51:33.942080] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.503 [2024-04-17 06:51:33.942086] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20af890) on tqpair=0x2044af0 00:25:29.503 [2024-04-17 06:51:33.942105] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.503 [2024-04-17 06:51:33.942113] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.503 [2024-04-17 06:51:33.942120] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2044af0) 00:25:29.503 [2024-04-17 06:51:33.942130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.503 [2024-04-17 06:51:33.942151] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af890, cid 3, qid 0 00:25:29.503 [2024-04-17 06:51:33.942287] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.503 [2024-04-17 06:51:33.942303] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.503 [2024-04-17 06:51:33.942309] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.503 [2024-04-17 06:51:33.942316] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20af890) on tqpair=0x2044af0 00:25:29.503 [2024-04-17 06:51:33.942333] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.504 [2024-04-17 06:51:33.942342] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.504 [2024-04-17 06:51:33.942349] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2044af0) 00:25:29.504 [2024-04-17 06:51:33.942359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.504 [2024-04-17 06:51:33.942380] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af890, cid 3, qid 0 00:25:29.504 [2024-04-17 06:51:33.942514] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.504 [2024-04-17 
06:51:33.942526] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.504 [2024-04-17 06:51:33.942533] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.504 [2024-04-17 06:51:33.942539] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20af890) on tqpair=0x2044af0 00:25:29.504 [2024-04-17 06:51:33.942556] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.504 [2024-04-17 06:51:33.942565] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.504 [2024-04-17 06:51:33.942571] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2044af0) 00:25:29.504 [2024-04-17 06:51:33.942581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.504 [2024-04-17 06:51:33.942601] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af890, cid 3, qid 0 00:25:29.504 [2024-04-17 06:51:33.942720] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.504 [2024-04-17 06:51:33.942736] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.504 [2024-04-17 06:51:33.942742] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.504 [2024-04-17 06:51:33.942749] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20af890) on tqpair=0x2044af0 00:25:29.504 [2024-04-17 06:51:33.942766] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.504 [2024-04-17 06:51:33.942776] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.504 [2024-04-17 06:51:33.942782] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2044af0) 00:25:29.504 [2024-04-17 06:51:33.942793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.504 [2024-04-17 06:51:33.942813] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af890, cid 3, qid 0 00:25:29.504 [2024-04-17 06:51:33.942945] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.504 [2024-04-17 06:51:33.942960] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.504 [2024-04-17 06:51:33.942967] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.504 [2024-04-17 06:51:33.942973] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20af890) on tqpair=0x2044af0 00:25:29.504 [2024-04-17 06:51:33.942991] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.504 [2024-04-17 06:51:33.943000] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.504 [2024-04-17 06:51:33.943006] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2044af0) 00:25:29.504 [2024-04-17 06:51:33.943016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.504 [2024-04-17 06:51:33.943037] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af890, cid 3, qid 0 00:25:29.504 [2024-04-17 06:51:33.943153] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.504 [2024-04-17 06:51:33.943165] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.504 [2024-04-17 06:51:33.943172] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:25:29.504 [2024-04-17 06:51:33.943187] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20af890) on tqpair=0x2044af0 00:25:29.504 [2024-04-17 06:51:33.943205] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.504 [2024-04-17 06:51:33.943214] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.504 [2024-04-17 06:51:33.943221] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2044af0) 00:25:29.504 [2024-04-17 06:51:33.943231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.504 [2024-04-17 06:51:33.943252] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af890, cid 3, qid 0 00:25:29.504 [2024-04-17 06:51:33.943371] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.504 [2024-04-17 06:51:33.943386] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.504 [2024-04-17 06:51:33.943393] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.504 [2024-04-17 06:51:33.943399] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20af890) on tqpair=0x2044af0 00:25:29.504 [2024-04-17 06:51:33.943416] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.504 [2024-04-17 06:51:33.943426] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.504 [2024-04-17 06:51:33.943432] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2044af0) 00:25:29.504 [2024-04-17 06:51:33.943442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.504 [2024-04-17 06:51:33.943469] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af890, cid 3, qid 0 00:25:29.504 [2024-04-17 06:51:33.943593] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.504 [2024-04-17 06:51:33.943612] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.504 [2024-04-17 06:51:33.943620] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.504 [2024-04-17 06:51:33.943626] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20af890) on tqpair=0x2044af0 00:25:29.504 [2024-04-17 06:51:33.943644] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.504 [2024-04-17 06:51:33.943653] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.504 [2024-04-17 06:51:33.943659] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2044af0) 00:25:29.504 [2024-04-17 06:51:33.943670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.504 [2024-04-17 06:51:33.943690] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af890, cid 3, qid 0 00:25:29.504 [2024-04-17 06:51:33.943806] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.504 [2024-04-17 06:51:33.943818] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.504 [2024-04-17 06:51:33.943825] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.504 [2024-04-17 06:51:33.943831] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20af890) on tqpair=0x2044af0 00:25:29.504 [2024-04-17 06:51:33.943848] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.504 [2024-04-17 06:51:33.943856] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.504 [2024-04-17 06:51:33.943863] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2044af0) 00:25:29.504 [2024-04-17 06:51:33.943873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.504 [2024-04-17 06:51:33.943893] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af890, cid 3, qid 0 00:25:29.504 [2024-04-17 06:51:33.944020] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.504 [2024-04-17 06:51:33.944031] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.504 [2024-04-17 06:51:33.944038] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.504 [2024-04-17 06:51:33.944044] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20af890) on tqpair=0x2044af0 00:25:29.504 [2024-04-17 06:51:33.944061] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.504 [2024-04-17 06:51:33.944069] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.504 [2024-04-17 06:51:33.944076] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2044af0) 00:25:29.504 [2024-04-17 06:51:33.944086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.504 [2024-04-17 06:51:33.944106] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af890, cid 3, qid 0 00:25:29.504 [2024-04-17 06:51:33.944242] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.504 [2024-04-17 06:51:33.944256] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.504 [2024-04-17 06:51:33.944262] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.504 [2024-04-17 06:51:33.944269] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20af890) on tqpair=0x2044af0 00:25:29.504 [2024-04-17 06:51:33.944286] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.504 [2024-04-17 06:51:33.944295] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.504 [2024-04-17 06:51:33.944302] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2044af0) 00:25:29.504 [2024-04-17 06:51:33.944312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.504 [2024-04-17 06:51:33.944333] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af890, cid 3, qid 0 00:25:29.504 [2024-04-17 06:51:33.944465] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.504 [2024-04-17 06:51:33.944480] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.504 [2024-04-17 06:51:33.944491] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.504 [2024-04-17 06:51:33.944498] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20af890) on tqpair=0x2044af0 00:25:29.504 [2024-04-17 06:51:33.944515] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.504 [2024-04-17 06:51:33.944524] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.505 [2024-04-17 
06:51:33.944531] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2044af0) 00:25:29.505 [2024-04-17 06:51:33.944541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.505 [2024-04-17 06:51:33.944562] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af890, cid 3, qid 0 00:25:29.505 [2024-04-17 06:51:33.944678] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.505 [2024-04-17 06:51:33.944690] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.505 [2024-04-17 06:51:33.944697] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.505 [2024-04-17 06:51:33.944703] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20af890) on tqpair=0x2044af0 00:25:29.505 [2024-04-17 06:51:33.944720] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.505 [2024-04-17 06:51:33.944729] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.505 [2024-04-17 06:51:33.944735] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2044af0) 00:25:29.505 [2024-04-17 06:51:33.944746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.505 [2024-04-17 06:51:33.944766] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af890, cid 3, qid 0 00:25:29.505 [2024-04-17 06:51:33.944887] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.505 [2024-04-17 06:51:33.944902] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.505 [2024-04-17 06:51:33.944908] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.505 [2024-04-17 06:51:33.944915] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20af890) on tqpair=0x2044af0 00:25:29.505 [2024-04-17 06:51:33.944932] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.505 [2024-04-17 06:51:33.944941] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.505 [2024-04-17 06:51:33.944948] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2044af0) 00:25:29.505 [2024-04-17 06:51:33.944958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.505 [2024-04-17 06:51:33.944978] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af890, cid 3, qid 0 00:25:29.505 [2024-04-17 06:51:33.945095] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.505 [2024-04-17 06:51:33.945110] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.505 [2024-04-17 06:51:33.945117] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.505 [2024-04-17 06:51:33.945123] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20af890) on tqpair=0x2044af0 00:25:29.505 [2024-04-17 06:51:33.945141] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.505 [2024-04-17 06:51:33.945150] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.505 [2024-04-17 06:51:33.945156] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2044af0) 00:25:29.505 [2024-04-17 06:51:33.945167] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.505 [2024-04-17 06:51:33.949199] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20af890, cid 3, qid 0 00:25:29.505 [2024-04-17 06:51:33.949359] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.505 [2024-04-17 06:51:33.949374] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.505 [2024-04-17 06:51:33.949381] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.505 [2024-04-17 06:51:33.949391] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20af890) on tqpair=0x2044af0 00:25:29.505 [2024-04-17 06:51:33.949407] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:25:29.505 00:25:29.505 06:51:33 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:29.505 [2024-04-17 06:51:33.980093] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:25:29.505 [2024-04-17 06:51:33.980134] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68581 ] 00:25:29.505 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.505 [2024-04-17 06:51:34.014195] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:25:29.505 [2024-04-17 06:51:34.014253] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:29.505 [2024-04-17 06:51:34.014263] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:29.505 [2024-04-17 06:51:34.014278] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:29.505 [2024-04-17 06:51:34.014290] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:29.505 [2024-04-17 06:51:34.014498] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:25:29.505 [2024-04-17 06:51:34.014537] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x7a8af0 0 00:25:29.505 [2024-04-17 06:51:34.029194] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:29.505 [2024-04-17 06:51:34.029214] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:29.505 [2024-04-17 06:51:34.029222] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:29.505 [2024-04-17 06:51:34.029228] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:29.505 [2024-04-17 06:51:34.029267] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.505 [2024-04-17 06:51:34.029278] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.505 [2024-04-17 06:51:34.029285] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7a8af0) 00:25:29.505 [2024-04-17 06:51:34.029299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:29.505 [2024-04-17 
06:51:34.029325] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x813470, cid 0, qid 0 00:25:29.505 [2024-04-17 06:51:34.037193] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.505 [2024-04-17 06:51:34.037212] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.505 [2024-04-17 06:51:34.037219] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.505 [2024-04-17 06:51:34.037226] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x813470) on tqpair=0x7a8af0 00:25:29.505 [2024-04-17 06:51:34.037239] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:29.505 [2024-04-17 06:51:34.037249] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:25:29.505 [2024-04-17 06:51:34.037258] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:25:29.505 [2024-04-17 06:51:34.037276] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.505 [2024-04-17 06:51:34.037284] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.505 [2024-04-17 06:51:34.037294] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7a8af0) 00:25:29.505 [2024-04-17 06:51:34.037307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.505 [2024-04-17 06:51:34.037330] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x813470, cid 0, qid 0 00:25:29.505 [2024-04-17 06:51:34.037509] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.505 [2024-04-17 06:51:34.037524] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.505 [2024-04-17 06:51:34.037530] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.505 [2024-04-17 06:51:34.037537] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x813470) on tqpair=0x7a8af0 00:25:29.505 [2024-04-17 06:51:34.037545] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:25:29.505 [2024-04-17 06:51:34.037558] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:25:29.505 [2024-04-17 06:51:34.037571] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.505 [2024-04-17 06:51:34.037578] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.505 [2024-04-17 06:51:34.037585] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7a8af0) 00:25:29.505 [2024-04-17 06:51:34.037595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.505 [2024-04-17 06:51:34.037617] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x813470, cid 0, qid 0 00:25:29.505 [2024-04-17 06:51:34.037859] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.505 [2024-04-17 06:51:34.037875] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.505 [2024-04-17 06:51:34.037882] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.505 [2024-04-17 06:51:34.037888] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0x813470) on tqpair=0x7a8af0 00:25:29.505 [2024-04-17 06:51:34.037897] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:25:29.506 [2024-04-17 06:51:34.037911] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:25:29.506 [2024-04-17 06:51:34.037923] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.506 [2024-04-17 06:51:34.037930] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.506 [2024-04-17 06:51:34.037936] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7a8af0) 00:25:29.506 [2024-04-17 06:51:34.037947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.506 [2024-04-17 06:51:34.037968] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x813470, cid 0, qid 0 00:25:29.506 [2024-04-17 06:51:34.038143] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.506 [2024-04-17 06:51:34.038158] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.506 [2024-04-17 06:51:34.038164] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.506 [2024-04-17 06:51:34.038171] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x813470) on tqpair=0x7a8af0 00:25:29.506 [2024-04-17 06:51:34.038188] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:29.506 [2024-04-17 06:51:34.038205] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.506 [2024-04-17 06:51:34.038215] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.506 [2024-04-17 06:51:34.038221] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7a8af0) 00:25:29.506 [2024-04-17 06:51:34.038232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.506 [2024-04-17 06:51:34.038254] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x813470, cid 0, qid 0 00:25:29.506 [2024-04-17 06:51:34.038382] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.506 [2024-04-17 06:51:34.038397] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.506 [2024-04-17 06:51:34.038404] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.506 [2024-04-17 06:51:34.038411] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x813470) on tqpair=0x7a8af0 00:25:29.506 [2024-04-17 06:51:34.038418] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:25:29.506 [2024-04-17 06:51:34.038426] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:25:29.506 [2024-04-17 06:51:34.038439] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:29.506 [2024-04-17 06:51:34.038561] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:25:29.506 [2024-04-17 06:51:34.038569] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:29.506 [2024-04-17 06:51:34.038580] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.506 [2024-04-17 06:51:34.038588] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.506 [2024-04-17 06:51:34.038593] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7a8af0) 00:25:29.506 [2024-04-17 06:51:34.038604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.506 [2024-04-17 06:51:34.038640] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x813470, cid 0, qid 0 00:25:29.506 [2024-04-17 06:51:34.038863] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.506 [2024-04-17 06:51:34.038879] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.506 [2024-04-17 06:51:34.038886] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.506 [2024-04-17 06:51:34.038892] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x813470) on tqpair=0x7a8af0 00:25:29.506 [2024-04-17 06:51:34.038900] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:29.506 [2024-04-17 06:51:34.038917] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.506 [2024-04-17 06:51:34.038926] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.506 [2024-04-17 06:51:34.038932] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7a8af0) 00:25:29.506 [2024-04-17 06:51:34.038942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.506 [2024-04-17 06:51:34.038963] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x813470, cid 0, qid 0 00:25:29.506 [2024-04-17 06:51:34.039087] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.506 [2024-04-17 06:51:34.039102] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.506 [2024-04-17 06:51:34.039108] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.506 [2024-04-17 06:51:34.039115] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x813470) on tqpair=0x7a8af0 00:25:29.506 [2024-04-17 06:51:34.039122] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:29.506 [2024-04-17 06:51:34.039130] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:25:29.506 [2024-04-17 06:51:34.039143] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:25:29.506 [2024-04-17 06:51:34.039157] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:25:29.506 [2024-04-17 06:51:34.039185] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.506 [2024-04-17 06:51:34.039196] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7a8af0) 00:25:29.506 
[2024-04-17 06:51:34.039207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.506 [2024-04-17 06:51:34.039229] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x813470, cid 0, qid 0 00:25:29.506 [2024-04-17 06:51:34.039391] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:29.506 [2024-04-17 06:51:34.039406] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:29.506 [2024-04-17 06:51:34.039413] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:29.506 [2024-04-17 06:51:34.039419] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7a8af0): datao=0, datal=4096, cccid=0 00:25:29.506 [2024-04-17 06:51:34.039427] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x813470) on tqpair(0x7a8af0): expected_datao=0, payload_size=4096 00:25:29.506 [2024-04-17 06:51:34.039434] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.506 [2024-04-17 06:51:34.039465] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:29.506 [2024-04-17 06:51:34.039474] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:29.506 [2024-04-17 06:51:34.081423] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.506 [2024-04-17 06:51:34.081443] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.506 [2024-04-17 06:51:34.081450] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.506 [2024-04-17 06:51:34.081457] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x813470) on tqpair=0x7a8af0 00:25:29.506 [2024-04-17 06:51:34.081468] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:25:29.506 [2024-04-17 06:51:34.081477] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:25:29.506 [2024-04-17 06:51:34.081484] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:25:29.506 [2024-04-17 06:51:34.081491] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:25:29.506 [2024-04-17 06:51:34.081498] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:25:29.506 [2024-04-17 06:51:34.081506] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:25:29.506 [2024-04-17 06:51:34.081520] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:25:29.506 [2024-04-17 06:51:34.081532] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.506 [2024-04-17 06:51:34.081540] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.506 [2024-04-17 06:51:34.081546] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7a8af0) 00:25:29.506 [2024-04-17 06:51:34.081558] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:29.506 [2024-04-17 06:51:34.081580] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x813470, cid 0, qid 0 00:25:29.506 [2024-04-17 
06:51:34.081720] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.506 [2024-04-17 06:51:34.081732] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.506 [2024-04-17 06:51:34.081739] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.506 [2024-04-17 06:51:34.081745] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x813470) on tqpair=0x7a8af0 00:25:29.506 [2024-04-17 06:51:34.081755] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.506 [2024-04-17 06:51:34.081763] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.506 [2024-04-17 06:51:34.081773] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7a8af0) 00:25:29.507 [2024-04-17 06:51:34.081784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:29.507 [2024-04-17 06:51:34.081794] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.507 [2024-04-17 06:51:34.081801] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.507 [2024-04-17 06:51:34.081807] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x7a8af0) 00:25:29.507 [2024-04-17 06:51:34.081816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:29.507 [2024-04-17 06:51:34.081826] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.507 [2024-04-17 06:51:34.081832] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.507 [2024-04-17 06:51:34.081839] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x7a8af0) 00:25:29.507 [2024-04-17 06:51:34.081847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:29.507 [2024-04-17 06:51:34.081857] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.507 [2024-04-17 06:51:34.081880] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.507 [2024-04-17 06:51:34.081886] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7a8af0) 00:25:29.507 [2024-04-17 06:51:34.081894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:29.507 [2024-04-17 06:51:34.081903] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:29.507 [2024-04-17 06:51:34.081921] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:29.507 [2024-04-17 06:51:34.081933] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.507 [2024-04-17 06:51:34.081954] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7a8af0) 00:25:29.507 [2024-04-17 06:51:34.081965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.507 [2024-04-17 06:51:34.081986] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x813470, cid 0, qid 0 00:25:29.507 [2024-04-17 06:51:34.081997] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8135d0, cid 1, qid 0 00:25:29.507 [2024-04-17 06:51:34.082018] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x813730, cid 2, qid 0 00:25:29.507 [2024-04-17 06:51:34.082027] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x813890, cid 3, qid 0 00:25:29.507 [2024-04-17 06:51:34.082034] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8139f0, cid 4, qid 0 00:25:29.507 [2024-04-17 06:51:34.082234] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.507 [2024-04-17 06:51:34.082250] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.507 [2024-04-17 06:51:34.082256] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.507 [2024-04-17 06:51:34.082263] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8139f0) on tqpair=0x7a8af0 00:25:29.507 [2024-04-17 06:51:34.082271] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:25:29.507 [2024-04-17 06:51:34.082280] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:29.507 [2024-04-17 06:51:34.082298] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:25:29.507 [2024-04-17 06:51:34.082309] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:29.507 [2024-04-17 06:51:34.082323] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.507 [2024-04-17 06:51:34.082331] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.507 [2024-04-17 06:51:34.082337] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7a8af0) 00:25:29.507 [2024-04-17 06:51:34.082348] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:29.507 [2024-04-17 06:51:34.082370] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8139f0, cid 4, qid 0 00:25:29.507 [2024-04-17 06:51:34.082545] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.507 [2024-04-17 06:51:34.082561] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.507 [2024-04-17 06:51:34.082567] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.507 [2024-04-17 06:51:34.082574] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8139f0) on tqpair=0x7a8af0 00:25:29.507 [2024-04-17 06:51:34.082627] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:25:29.507 [2024-04-17 06:51:34.082644] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:29.507 [2024-04-17 06:51:34.082658] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.507 [2024-04-17 06:51:34.082666] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7a8af0) 00:25:29.507 [2024-04-17 06:51:34.082676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.507 [2024-04-17 06:51:34.082712] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8139f0, cid 4, qid 0 00:25:29.507 [2024-04-17 06:51:34.082921] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:29.507 [2024-04-17 06:51:34.082937] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:29.507 [2024-04-17 06:51:34.082943] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:29.507 [2024-04-17 06:51:34.082950] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7a8af0): datao=0, datal=4096, cccid=4 00:25:29.507 [2024-04-17 06:51:34.082957] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8139f0) on tqpair(0x7a8af0): expected_datao=0, payload_size=4096 00:25:29.507 [2024-04-17 06:51:34.082965] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.507 [2024-04-17 06:51:34.082975] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:29.507 [2024-04-17 06:51:34.082982] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:29.507 [2024-04-17 06:51:34.083061] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.507 [2024-04-17 06:51:34.083073] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.507 [2024-04-17 06:51:34.083080] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.507 [2024-04-17 06:51:34.083086] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8139f0) on tqpair=0x7a8af0 00:25:29.507 [2024-04-17 06:51:34.083100] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:25:29.507 [2024-04-17 06:51:34.083120] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:25:29.507 [2024-04-17 06:51:34.083136] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:25:29.507 [2024-04-17 06:51:34.083149] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.508 [2024-04-17 06:51:34.083157] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7a8af0) 00:25:29.508 [2024-04-17 06:51:34.083168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.508 [2024-04-17 06:51:34.087202] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8139f0, cid 4, qid 0 00:25:29.508 [2024-04-17 06:51:34.087398] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:29.508 [2024-04-17 06:51:34.087413] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:29.508 [2024-04-17 06:51:34.087420] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:29.508 [2024-04-17 06:51:34.087427] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7a8af0): datao=0, datal=4096, cccid=4 00:25:29.508 [2024-04-17 06:51:34.087434] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8139f0) on tqpair(0x7a8af0): expected_datao=0, payload_size=4096 00:25:29.508 [2024-04-17 06:51:34.087441] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.508 [2024-04-17 06:51:34.087452] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:29.508 
[2024-04-17 06:51:34.087459] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:29.508 [2024-04-17 06:51:34.087497] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.508 [2024-04-17 06:51:34.087508] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.508 [2024-04-17 06:51:34.087514] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.508 [2024-04-17 06:51:34.087521] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8139f0) on tqpair=0x7a8af0 00:25:29.508 [2024-04-17 06:51:34.087540] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:29.508 [2024-04-17 06:51:34.087559] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:29.508 [2024-04-17 06:51:34.087572] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.508 [2024-04-17 06:51:34.087580] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7a8af0) 00:25:29.508 [2024-04-17 06:51:34.087591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.508 [2024-04-17 06:51:34.087612] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8139f0, cid 4, qid 0 00:25:29.508 [2024-04-17 06:51:34.087755] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:29.508 [2024-04-17 06:51:34.087771] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:29.508 [2024-04-17 06:51:34.087777] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:29.508 [2024-04-17 06:51:34.087783] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7a8af0): datao=0, datal=4096, cccid=4 00:25:29.508 [2024-04-17 06:51:34.087791] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8139f0) on tqpair(0x7a8af0): expected_datao=0, payload_size=4096 00:25:29.508 [2024-04-17 06:51:34.087798] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.508 [2024-04-17 06:51:34.087807] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:29.508 [2024-04-17 06:51:34.087815] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:29.508 [2024-04-17 06:51:34.087887] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.508 [2024-04-17 06:51:34.087902] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.508 [2024-04-17 06:51:34.087908] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.508 [2024-04-17 06:51:34.087915] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8139f0) on tqpair=0x7a8af0 00:25:29.508 [2024-04-17 06:51:34.087927] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:29.508 [2024-04-17 06:51:34.087941] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:29.508 [2024-04-17 06:51:34.087957] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:25:29.508 [2024-04-17 06:51:34.087967] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:29.508 [2024-04-17 06:51:34.087979] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:29.508 [2024-04-17 06:51:34.087988] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:25:29.508 [2024-04-17 06:51:34.087995] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:29.508 [2024-04-17 06:51:34.088004] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:25:29.508 [2024-04-17 06:51:34.088023] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.508 [2024-04-17 06:51:34.088047] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7a8af0) 00:25:29.508 [2024-04-17 06:51:34.088058] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.508 [2024-04-17 06:51:34.088069] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.508 [2024-04-17 06:51:34.088076] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.508 [2024-04-17 06:51:34.088082] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7a8af0) 00:25:29.508 [2024-04-17 06:51:34.088105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:29.508 [2024-04-17 06:51:34.088129] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8139f0, cid 4, qid 0 00:25:29.508 [2024-04-17 06:51:34.088140] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x813b50, cid 5, qid 0 00:25:29.508 [2024-04-17 06:51:34.088364] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.508 [2024-04-17 06:51:34.088380] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.508 [2024-04-17 06:51:34.088387] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.508 [2024-04-17 06:51:34.088393] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8139f0) on tqpair=0x7a8af0 00:25:29.508 [2024-04-17 06:51:34.088404] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.508 [2024-04-17 06:51:34.088412] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.508 [2024-04-17 06:51:34.088418] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.508 [2024-04-17 06:51:34.088425] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x813b50) on tqpair=0x7a8af0 00:25:29.508 [2024-04-17 06:51:34.088441] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.508 [2024-04-17 06:51:34.088449] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7a8af0) 00:25:29.508 [2024-04-17 06:51:34.088460] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.508 [2024-04-17 06:51:34.088496] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x813b50, cid 5, qid 0 00:25:29.508 [2024-04-17 
06:51:34.088725] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.508 [2024-04-17 06:51:34.088740] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.508 [2024-04-17 06:51:34.088747] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.508 [2024-04-17 06:51:34.088753] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x813b50) on tqpair=0x7a8af0 00:25:29.508 [2024-04-17 06:51:34.088769] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.508 [2024-04-17 06:51:34.088778] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7a8af0) 00:25:29.508 [2024-04-17 06:51:34.088789] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.508 [2024-04-17 06:51:34.088809] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x813b50, cid 5, qid 0 00:25:29.508 [2024-04-17 06:51:34.088937] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.508 [2024-04-17 06:51:34.088956] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.508 [2024-04-17 06:51:34.088963] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.508 [2024-04-17 06:51:34.088970] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x813b50) on tqpair=0x7a8af0 00:25:29.508 [2024-04-17 06:51:34.088985] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.508 [2024-04-17 06:51:34.088994] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7a8af0) 00:25:29.508 [2024-04-17 06:51:34.089005] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.508 [2024-04-17 06:51:34.089025] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x813b50, cid 5, qid 0 00:25:29.508 [2024-04-17 06:51:34.089188] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.508 [2024-04-17 06:51:34.089202] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.508 [2024-04-17 06:51:34.089209] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.508 [2024-04-17 06:51:34.089215] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x813b50) on tqpair=0x7a8af0 00:25:29.508 [2024-04-17 06:51:34.089233] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.508 [2024-04-17 06:51:34.089243] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7a8af0) 00:25:29.508 [2024-04-17 06:51:34.089254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.508 [2024-04-17 06:51:34.089265] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.508 [2024-04-17 06:51:34.089273] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7a8af0) 00:25:29.508 [2024-04-17 06:51:34.089282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.508 [2024-04-17 06:51:34.089293] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.508 [2024-04-17 
06:51:34.089301] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x7a8af0) 00:25:29.508 [2024-04-17 06:51:34.089310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.509 [2024-04-17 06:51:34.089321] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.509 [2024-04-17 06:51:34.089329] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x7a8af0) 00:25:29.509 [2024-04-17 06:51:34.089338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.509 [2024-04-17 06:51:34.089374] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x813b50, cid 5, qid 0 00:25:29.509 [2024-04-17 06:51:34.089385] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8139f0, cid 4, qid 0 00:25:29.509 [2024-04-17 06:51:34.089393] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x813cb0, cid 6, qid 0 00:25:29.509 [2024-04-17 06:51:34.089400] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x813e10, cid 7, qid 0 00:25:29.509 [2024-04-17 06:51:34.089687] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:29.509 [2024-04-17 06:51:34.089703] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:29.509 [2024-04-17 06:51:34.089710] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:29.509 [2024-04-17 06:51:34.089716] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7a8af0): datao=0, datal=8192, cccid=5 00:25:29.509 [2024-04-17 06:51:34.089723] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x813b50) on tqpair(0x7a8af0): expected_datao=0, payload_size=8192 00:25:29.509 [2024-04-17 06:51:34.089731] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.509 [2024-04-17 06:51:34.089865] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:29.509 [2024-04-17 06:51:34.089877] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:29.509 [2024-04-17 06:51:34.089886] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:29.509 [2024-04-17 06:51:34.089895] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:29.509 [2024-04-17 06:51:34.089901] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:29.509 [2024-04-17 06:51:34.089907] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7a8af0): datao=0, datal=512, cccid=4 00:25:29.509 [2024-04-17 06:51:34.089915] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8139f0) on tqpair(0x7a8af0): expected_datao=0, payload_size=512 00:25:29.509 [2024-04-17 06:51:34.089922] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.509 [2024-04-17 06:51:34.089931] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:29.509 [2024-04-17 06:51:34.089938] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:29.509 [2024-04-17 06:51:34.089946] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:29.509 [2024-04-17 06:51:34.089954] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:29.509 [2024-04-17 06:51:34.089960] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
enter 00:25:29.509 [2024-04-17 06:51:34.089967] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7a8af0): datao=0, datal=512, cccid=6 00:25:29.509 [2024-04-17 06:51:34.089974] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x813cb0) on tqpair(0x7a8af0): expected_datao=0, payload_size=512 00:25:29.509 [2024-04-17 06:51:34.089981] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.509 [2024-04-17 06:51:34.089990] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:29.509 [2024-04-17 06:51:34.089996] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:29.509 [2024-04-17 06:51:34.090004] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:29.509 [2024-04-17 06:51:34.090013] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:29.509 [2024-04-17 06:51:34.090019] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:29.509 [2024-04-17 06:51:34.090025] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7a8af0): datao=0, datal=4096, cccid=7 00:25:29.509 [2024-04-17 06:51:34.090032] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x813e10) on tqpair(0x7a8af0): expected_datao=0, payload_size=4096 00:25:29.509 [2024-04-17 06:51:34.090040] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.509 [2024-04-17 06:51:34.090049] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:29.509 [2024-04-17 06:51:34.090056] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:29.509 [2024-04-17 06:51:34.090067] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.509 [2024-04-17 06:51:34.090076] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.509 [2024-04-17 06:51:34.090082] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.509 [2024-04-17 06:51:34.090089] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x813b50) on tqpair=0x7a8af0 00:25:29.509 [2024-04-17 06:51:34.090108] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.509 [2024-04-17 06:51:34.090119] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.509 [2024-04-17 06:51:34.090125] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.509 [2024-04-17 06:51:34.090132] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8139f0) on tqpair=0x7a8af0 00:25:29.509 [2024-04-17 06:51:34.090160] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.509 [2024-04-17 06:51:34.090170] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.509 [2024-04-17 06:51:34.090184] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.509 [2024-04-17 06:51:34.090191] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x813cb0) on tqpair=0x7a8af0 00:25:29.509 [2024-04-17 06:51:34.090216] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.509 [2024-04-17 06:51:34.090229] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.509 [2024-04-17 06:51:34.090236] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.509 [2024-04-17 06:51:34.090243] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x813e10) on tqpair=0x7a8af0 00:25:29.509 ===================================================== 00:25:29.509 NVMe 
over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:29.509 ===================================================== 00:25:29.509 Controller Capabilities/Features 00:25:29.509 ================================ 00:25:29.509 Vendor ID: 8086 00:25:29.509 Subsystem Vendor ID: 8086 00:25:29.509 Serial Number: SPDK00000000000001 00:25:29.509 Model Number: SPDK bdev Controller 00:25:29.509 Firmware Version: 24.05 00:25:29.509 Recommended Arb Burst: 6 00:25:29.509 IEEE OUI Identifier: e4 d2 5c 00:25:29.509 Multi-path I/O 00:25:29.509 May have multiple subsystem ports: Yes 00:25:29.509 May have multiple controllers: Yes 00:25:29.509 Associated with SR-IOV VF: No 00:25:29.509 Max Data Transfer Size: 131072 00:25:29.509 Max Number of Namespaces: 32 00:25:29.509 Max Number of I/O Queues: 127 00:25:29.509 NVMe Specification Version (VS): 1.3 00:25:29.509 NVMe Specification Version (Identify): 1.3 00:25:29.509 Maximum Queue Entries: 128 00:25:29.509 Contiguous Queues Required: Yes 00:25:29.509 Arbitration Mechanisms Supported 00:25:29.509 Weighted Round Robin: Not Supported 00:25:29.509 Vendor Specific: Not Supported 00:25:29.509 Reset Timeout: 15000 ms 00:25:29.509 Doorbell Stride: 4 bytes 00:25:29.509 NVM Subsystem Reset: Not Supported 00:25:29.509 Command Sets Supported 00:25:29.509 NVM Command Set: Supported 00:25:29.509 Boot Partition: Not Supported 00:25:29.509 Memory Page Size Minimum: 4096 bytes 00:25:29.509 Memory Page Size Maximum: 4096 bytes 00:25:29.509 Persistent Memory Region: Not Supported 00:25:29.509 Optional Asynchronous Events Supported 00:25:29.509 Namespace Attribute Notices: Supported 00:25:29.509 Firmware Activation Notices: Not Supported 00:25:29.509 ANA Change Notices: Not Supported 00:25:29.509 PLE Aggregate Log Change Notices: Not Supported 00:25:29.509 LBA Status Info Alert Notices: Not Supported 00:25:29.509 EGE Aggregate Log Change Notices: Not Supported 00:25:29.509 Normal NVM Subsystem Shutdown event: Not Supported 00:25:29.509 Zone Descriptor Change Notices: Not Supported 00:25:29.509 Discovery Log Change Notices: Not Supported 00:25:29.509 Controller Attributes 00:25:29.509 128-bit Host Identifier: Supported 00:25:29.509 Non-Operational Permissive Mode: Not Supported 00:25:29.509 NVM Sets: Not Supported 00:25:29.509 Read Recovery Levels: Not Supported 00:25:29.509 Endurance Groups: Not Supported 00:25:29.509 Predictable Latency Mode: Not Supported 00:25:29.509 Traffic Based Keep ALive: Not Supported 00:25:29.509 Namespace Granularity: Not Supported 00:25:29.509 SQ Associations: Not Supported 00:25:29.509 UUID List: Not Supported 00:25:29.509 Multi-Domain Subsystem: Not Supported 00:25:29.509 Fixed Capacity Management: Not Supported 00:25:29.509 Variable Capacity Management: Not Supported 00:25:29.509 Delete Endurance Group: Not Supported 00:25:29.509 Delete NVM Set: Not Supported 00:25:29.509 Extended LBA Formats Supported: Not Supported 00:25:29.509 Flexible Data Placement Supported: Not Supported 00:25:29.509 00:25:29.509 Controller Memory Buffer Support 00:25:29.509 ================================ 00:25:29.509 Supported: No 00:25:29.509 00:25:29.509 Persistent Memory Region Support 00:25:29.509 ================================ 00:25:29.509 Supported: No 00:25:29.509 00:25:29.509 Admin Command Set Attributes 00:25:29.509 ============================ 00:25:29.509 Security Send/Receive: Not Supported 00:25:29.509 Format NVM: Not Supported 00:25:29.509 Firmware Activate/Download: Not Supported 00:25:29.509 Namespace Management: Not Supported 
00:25:29.509 Device Self-Test: Not Supported 00:25:29.509 Directives: Not Supported 00:25:29.509 NVMe-MI: Not Supported 00:25:29.509 Virtualization Management: Not Supported 00:25:29.509 Doorbell Buffer Config: Not Supported 00:25:29.509 Get LBA Status Capability: Not Supported 00:25:29.510 Command & Feature Lockdown Capability: Not Supported 00:25:29.510 Abort Command Limit: 4 00:25:29.510 Async Event Request Limit: 4 00:25:29.510 Number of Firmware Slots: N/A 00:25:29.510 Firmware Slot 1 Read-Only: N/A 00:25:29.510 Firmware Activation Without Reset: N/A 00:25:29.510 Multiple Update Detection Support: N/A 00:25:29.510 Firmware Update Granularity: No Information Provided 00:25:29.510 Per-Namespace SMART Log: No 00:25:29.510 Asymmetric Namespace Access Log Page: Not Supported 00:25:29.510 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:29.510 Command Effects Log Page: Supported 00:25:29.510 Get Log Page Extended Data: Supported 00:25:29.510 Telemetry Log Pages: Not Supported 00:25:29.510 Persistent Event Log Pages: Not Supported 00:25:29.510 Supported Log Pages Log Page: May Support 00:25:29.510 Commands Supported & Effects Log Page: Not Supported 00:25:29.510 Feature Identifiers & Effects Log Page:May Support 00:25:29.510 NVMe-MI Commands & Effects Log Page: May Support 00:25:29.510 Data Area 4 for Telemetry Log: Not Supported 00:25:29.510 Error Log Page Entries Supported: 128 00:25:29.510 Keep Alive: Supported 00:25:29.510 Keep Alive Granularity: 10000 ms 00:25:29.510 00:25:29.510 NVM Command Set Attributes 00:25:29.510 ========================== 00:25:29.510 Submission Queue Entry Size 00:25:29.510 Max: 64 00:25:29.510 Min: 64 00:25:29.510 Completion Queue Entry Size 00:25:29.510 Max: 16 00:25:29.510 Min: 16 00:25:29.510 Number of Namespaces: 32 00:25:29.510 Compare Command: Supported 00:25:29.510 Write Uncorrectable Command: Not Supported 00:25:29.510 Dataset Management Command: Supported 00:25:29.510 Write Zeroes Command: Supported 00:25:29.510 Set Features Save Field: Not Supported 00:25:29.510 Reservations: Supported 00:25:29.510 Timestamp: Not Supported 00:25:29.510 Copy: Supported 00:25:29.510 Volatile Write Cache: Present 00:25:29.510 Atomic Write Unit (Normal): 1 00:25:29.510 Atomic Write Unit (PFail): 1 00:25:29.510 Atomic Compare & Write Unit: 1 00:25:29.510 Fused Compare & Write: Supported 00:25:29.510 Scatter-Gather List 00:25:29.510 SGL Command Set: Supported 00:25:29.510 SGL Keyed: Supported 00:25:29.510 SGL Bit Bucket Descriptor: Not Supported 00:25:29.510 SGL Metadata Pointer: Not Supported 00:25:29.510 Oversized SGL: Not Supported 00:25:29.510 SGL Metadata Address: Not Supported 00:25:29.510 SGL Offset: Supported 00:25:29.510 Transport SGL Data Block: Not Supported 00:25:29.510 Replay Protected Memory Block: Not Supported 00:25:29.510 00:25:29.510 Firmware Slot Information 00:25:29.510 ========================= 00:25:29.510 Active slot: 1 00:25:29.510 Slot 1 Firmware Revision: 24.05 00:25:29.510 00:25:29.510 00:25:29.510 Commands Supported and Effects 00:25:29.510 ============================== 00:25:29.510 Admin Commands 00:25:29.510 -------------- 00:25:29.510 Get Log Page (02h): Supported 00:25:29.510 Identify (06h): Supported 00:25:29.510 Abort (08h): Supported 00:25:29.510 Set Features (09h): Supported 00:25:29.510 Get Features (0Ah): Supported 00:25:29.510 Asynchronous Event Request (0Ch): Supported 00:25:29.510 Keep Alive (18h): Supported 00:25:29.510 I/O Commands 00:25:29.510 ------------ 00:25:29.510 Flush (00h): Supported LBA-Change 00:25:29.510 Write (01h): 
Supported LBA-Change 00:25:29.510 Read (02h): Supported 00:25:29.510 Compare (05h): Supported 00:25:29.510 Write Zeroes (08h): Supported LBA-Change 00:25:29.510 Dataset Management (09h): Supported LBA-Change 00:25:29.510 Copy (19h): Supported LBA-Change 00:25:29.510 Unknown (79h): Supported LBA-Change 00:25:29.510 Unknown (7Ah): Supported 00:25:29.510 00:25:29.510 Error Log 00:25:29.510 ========= 00:25:29.510 00:25:29.510 Arbitration 00:25:29.510 =========== 00:25:29.510 Arbitration Burst: 1 00:25:29.510 00:25:29.510 Power Management 00:25:29.510 ================ 00:25:29.510 Number of Power States: 1 00:25:29.510 Current Power State: Power State #0 00:25:29.510 Power State #0: 00:25:29.510 Max Power: 0.00 W 00:25:29.510 Non-Operational State: Operational 00:25:29.510 Entry Latency: Not Reported 00:25:29.510 Exit Latency: Not Reported 00:25:29.510 Relative Read Throughput: 0 00:25:29.510 Relative Read Latency: 0 00:25:29.510 Relative Write Throughput: 0 00:25:29.510 Relative Write Latency: 0 00:25:29.510 Idle Power: Not Reported 00:25:29.510 Active Power: Not Reported 00:25:29.510 Non-Operational Permissive Mode: Not Supported 00:25:29.510 00:25:29.510 Health Information 00:25:29.510 ================== 00:25:29.510 Critical Warnings: 00:25:29.510 Available Spare Space: OK 00:25:29.510 Temperature: OK 00:25:29.510 Device Reliability: OK 00:25:29.510 Read Only: No 00:25:29.510 Volatile Memory Backup: OK 00:25:29.510 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:29.510 Temperature Threshold: [2024-04-17 06:51:34.090379] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.510 [2024-04-17 06:51:34.090391] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x7a8af0) 00:25:29.510 [2024-04-17 06:51:34.090401] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.510 [2024-04-17 06:51:34.090423] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x813e10, cid 7, qid 0 00:25:29.510 [2024-04-17 06:51:34.090615] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.510 [2024-04-17 06:51:34.090630] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.510 [2024-04-17 06:51:34.090637] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.510 [2024-04-17 06:51:34.090643] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x813e10) on tqpair=0x7a8af0 00:25:29.510 [2024-04-17 06:51:34.090686] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:25:29.510 [2024-04-17 06:51:34.090707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.510 [2024-04-17 06:51:34.090718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.510 [2024-04-17 06:51:34.090728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.510 [2024-04-17 06:51:34.090737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.510 [2024-04-17 06:51:34.090750] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.510 [2024-04-17 06:51:34.090758] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.510 [2024-04-17 06:51:34.090764] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7a8af0) 00:25:29.510 [2024-04-17 06:51:34.090775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.510 [2024-04-17 06:51:34.090797] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x813890, cid 3, qid 0 00:25:29.510 [2024-04-17 06:51:34.090972] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.510 [2024-04-17 06:51:34.090988] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.510 [2024-04-17 06:51:34.090994] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.510 [2024-04-17 06:51:34.091001] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x813890) on tqpair=0x7a8af0 00:25:29.510 [2024-04-17 06:51:34.091012] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.510 [2024-04-17 06:51:34.091019] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.510 [2024-04-17 06:51:34.091025] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7a8af0) 00:25:29.510 [2024-04-17 06:51:34.091036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.510 [2024-04-17 06:51:34.091062] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x813890, cid 3, qid 0 00:25:29.510 [2024-04-17 06:51:34.095188] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.510 [2024-04-17 06:51:34.095204] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.510 [2024-04-17 06:51:34.095211] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.510 [2024-04-17 06:51:34.095218] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x813890) on tqpair=0x7a8af0 00:25:29.510 [2024-04-17 06:51:34.095241] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:29.510 [2024-04-17 06:51:34.095253] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:25:29.510 [2024-04-17 06:51:34.095271] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:29.510 [2024-04-17 06:51:34.095280] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:29.510 [2024-04-17 06:51:34.095286] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7a8af0) 00:25:29.510 [2024-04-17 06:51:34.095297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.510 [2024-04-17 06:51:34.095319] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x813890, cid 3, qid 0 00:25:29.511 [2024-04-17 06:51:34.095476] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:29.511 [2024-04-17 06:51:34.095491] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:29.511 [2024-04-17 06:51:34.095498] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:29.511 [2024-04-17 06:51:34.095504] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x813890) on tqpair=0x7a8af0 00:25:29.511 [2024-04-17 06:51:34.095517] 
nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:25:29.769 0 Kelvin (-273 Celsius) 00:25:29.769 Available Spare: 0% 00:25:29.769 Available Spare Threshold: 0% 00:25:29.769 Life Percentage Used: 0% 00:25:29.769 Data Units Read: 0 00:25:29.769 Data Units Written: 0 00:25:29.769 Host Read Commands: 0 00:25:29.769 Host Write Commands: 0 00:25:29.769 Controller Busy Time: 0 minutes 00:25:29.769 Power Cycles: 0 00:25:29.769 Power On Hours: 0 hours 00:25:29.769 Unsafe Shutdowns: 0 00:25:29.769 Unrecoverable Media Errors: 0 00:25:29.769 Lifetime Error Log Entries: 0 00:25:29.769 Warning Temperature Time: 0 minutes 00:25:29.769 Critical Temperature Time: 0 minutes 00:25:29.769 00:25:29.769 Number of Queues 00:25:29.769 ================ 00:25:29.769 Number of I/O Submission Queues: 127 00:25:29.769 Number of I/O Completion Queues: 127 00:25:29.769 00:25:29.769 Active Namespaces 00:25:29.769 ================= 00:25:29.769 Namespace ID:1 00:25:29.769 Error Recovery Timeout: Unlimited 00:25:29.769 Command Set Identifier: NVM (00h) 00:25:29.769 Deallocate: Supported 00:25:29.769 Deallocated/Unwritten Error: Not Supported 00:25:29.769 Deallocated Read Value: Unknown 00:25:29.769 Deallocate in Write Zeroes: Not Supported 00:25:29.769 Deallocated Guard Field: 0xFFFF 00:25:29.769 Flush: Supported 00:25:29.769 Reservation: Supported 00:25:29.769 Namespace Sharing Capabilities: Multiple Controllers 00:25:29.769 Size (in LBAs): 131072 (0GiB) 00:25:29.769 Capacity (in LBAs): 131072 (0GiB) 00:25:29.769 Utilization (in LBAs): 131072 (0GiB) 00:25:29.769 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:29.769 EUI64: ABCDEF0123456789 00:25:29.769 UUID: 3f66b2af-7eb4-4bf6-9429-6f8c7aee268f 00:25:29.769 Thin Provisioning: Not Supported 00:25:29.769 Per-NS Atomic Units: Yes 00:25:29.769 Atomic Boundary Size (Normal): 0 00:25:29.769 Atomic Boundary Size (PFail): 0 00:25:29.769 Atomic Boundary Offset: 0 00:25:29.769 Maximum Single Source Range Length: 65535 00:25:29.769 Maximum Copy Length: 65535 00:25:29.769 Maximum Source Range Count: 1 00:25:29.769 NGUID/EUI64 Never Reused: No 00:25:29.769 Namespace Write Protected: No 00:25:29.769 Number of LBA Formats: 1 00:25:29.769 Current LBA Format: LBA Format #00 00:25:29.769 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:29.769 00:25:29.769 06:51:34 -- host/identify.sh@51 -- # sync 00:25:29.769 06:51:34 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:29.769 06:51:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.769 06:51:34 -- common/autotest_common.sh@10 -- # set +x 00:25:29.769 06:51:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.769 06:51:34 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:29.769 06:51:34 -- host/identify.sh@56 -- # nvmftestfini 00:25:29.769 06:51:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:29.769 06:51:34 -- nvmf/common.sh@117 -- # sync 00:25:29.769 06:51:34 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:29.769 06:51:34 -- nvmf/common.sh@120 -- # set +e 00:25:29.769 06:51:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:29.769 06:51:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:29.770 rmmod nvme_tcp 00:25:29.770 rmmod nvme_fabrics 00:25:29.770 rmmod nvme_keyring 00:25:29.770 06:51:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:29.770 06:51:34 -- nvmf/common.sh@124 -- # set -e 00:25:29.770 06:51:34 -- nvmf/common.sh@125 -- # return 0 00:25:29.770 
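For reference, the controller and namespace data dumped above can also be queried from any Linux initiator while the target is listening on 10.0.0.2:4420. A minimal sketch using stock nvme-cli (an assumption for illustration: the test itself drives SPDK's userspace identify path, and the /dev/nvme0 names depend on the host):
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme id-ctrl /dev/nvme0     # controller capabilities, as printed above
  nvme id-ns /dev/nvme0n1     # namespace data: NGUID, EUI64, UUID, LBA formats
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1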
06:51:34 -- nvmf/common.sh@478 -- # '[' -n 68543 ']' 00:25:29.770 06:51:34 -- nvmf/common.sh@479 -- # killprocess 68543 00:25:29.770 06:51:34 -- common/autotest_common.sh@936 -- # '[' -z 68543 ']' 00:25:29.770 06:51:34 -- common/autotest_common.sh@940 -- # kill -0 68543 00:25:29.770 06:51:34 -- common/autotest_common.sh@941 -- # uname 00:25:29.770 06:51:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:29.770 06:51:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68543 00:25:29.770 06:51:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:29.770 06:51:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:29.770 06:51:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68543' 00:25:29.770 killing process with pid 68543 00:25:29.770 06:51:34 -- common/autotest_common.sh@955 -- # kill 68543 00:25:29.770 [2024-04-17 06:51:34.210676] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:29.770 06:51:34 -- common/autotest_common.sh@960 -- # wait 68543 00:25:30.028 06:51:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:30.028 06:51:34 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:30.028 06:51:34 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:30.028 06:51:34 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:30.028 06:51:34 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:30.028 06:51:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.028 06:51:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:30.028 06:51:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.929 06:51:36 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:31.929 00:25:31.929 real 0m5.390s 00:25:31.929 user 0m4.391s 00:25:31.929 sys 0m1.848s 00:25:31.929 06:51:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:31.929 06:51:36 -- common/autotest_common.sh@10 -- # set +x 00:25:31.929 ************************************ 00:25:31.929 END TEST nvmf_identify 00:25:31.929 ************************************ 00:25:32.188 06:51:36 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:32.188 06:51:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:32.188 06:51:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:32.188 06:51:36 -- common/autotest_common.sh@10 -- # set +x 00:25:32.188 ************************************ 00:25:32.188 START TEST nvmf_perf 00:25:32.188 ************************************ 00:25:32.188 06:51:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:32.188 * Looking for test storage... 
00:25:32.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:32.188 06:51:36 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:32.188 06:51:36 -- nvmf/common.sh@7 -- # uname -s 00:25:32.188 06:51:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:32.188 06:51:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:32.188 06:51:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:32.188 06:51:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:32.188 06:51:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:32.188 06:51:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:32.188 06:51:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:32.188 06:51:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:32.188 06:51:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:32.188 06:51:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:32.188 06:51:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:32.188 06:51:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:32.188 06:51:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:32.188 06:51:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:32.188 06:51:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:32.188 06:51:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:32.188 06:51:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:32.188 06:51:36 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:32.188 06:51:36 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:32.188 06:51:36 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:32.188 06:51:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.189 06:51:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.189 06:51:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.189 06:51:36 -- paths/export.sh@5 -- # export PATH 00:25:32.189 06:51:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.189 06:51:36 -- nvmf/common.sh@47 -- # : 0 00:25:32.189 06:51:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:32.189 06:51:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:32.189 06:51:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:32.189 06:51:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:32.189 06:51:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:32.189 06:51:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:32.189 06:51:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:32.189 06:51:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:32.189 06:51:36 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:32.189 06:51:36 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:32.189 06:51:36 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:32.189 06:51:36 -- host/perf.sh@17 -- # nvmftestinit 00:25:32.189 06:51:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:32.189 06:51:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:32.189 06:51:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:32.189 06:51:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:32.189 06:51:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:32.189 06:51:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.189 06:51:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:32.189 06:51:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.189 06:51:36 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:32.189 06:51:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:32.189 06:51:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:32.189 06:51:36 -- common/autotest_common.sh@10 -- # set +x 00:25:34.089 06:51:38 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:34.089 06:51:38 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:34.089 06:51:38 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:34.089 06:51:38 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:34.089 06:51:38 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:34.089 06:51:38 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:34.089 06:51:38 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:34.089 06:51:38 -- nvmf/common.sh@295 -- # net_devs=() 
00:25:34.089 06:51:38 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:34.089 06:51:38 -- nvmf/common.sh@296 -- # e810=() 00:25:34.089 06:51:38 -- nvmf/common.sh@296 -- # local -ga e810 00:25:34.089 06:51:38 -- nvmf/common.sh@297 -- # x722=() 00:25:34.089 06:51:38 -- nvmf/common.sh@297 -- # local -ga x722 00:25:34.089 06:51:38 -- nvmf/common.sh@298 -- # mlx=() 00:25:34.089 06:51:38 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:34.089 06:51:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:34.089 06:51:38 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:34.089 06:51:38 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:34.089 06:51:38 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:34.089 06:51:38 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:34.089 06:51:38 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:34.089 06:51:38 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:34.089 06:51:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:34.089 06:51:38 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:34.089 06:51:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:34.089 06:51:38 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:34.089 06:51:38 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:34.089 06:51:38 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:34.089 06:51:38 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:34.089 06:51:38 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:34.089 06:51:38 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:34.089 06:51:38 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:34.089 06:51:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:34.089 06:51:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:34.089 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:34.089 06:51:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:34.089 06:51:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:34.089 06:51:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.089 06:51:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.089 06:51:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:34.089 06:51:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:34.089 06:51:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:34.089 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:34.089 06:51:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:34.089 06:51:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:34.089 06:51:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.089 06:51:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.089 06:51:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:34.089 06:51:38 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:34.089 06:51:38 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:34.089 06:51:38 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:34.089 06:51:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:34.089 06:51:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.089 06:51:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:34.089 06:51:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
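The NIC detection above keys off PCI vendor/device IDs (0x8086:0x159b, grouped under e810 in the script) and then resolves the bound netdev through sysfs. The same check can be done by hand (a sketch, assuming lspci is installed and the same 0000:0a:00.x addresses):
  lspci -d 8086:159b
  ls /sys/bus/pci/devices/0000:0a:00.0/net
  ls /sys/bus/pci/devices/0000:0a:00.1/net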
00:25:34.089 06:51:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:34.089 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:34.089 06:51:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.089 06:51:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:34.090 06:51:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.090 06:51:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:34.090 06:51:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.090 06:51:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:34.090 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:34.090 06:51:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.090 06:51:38 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:34.090 06:51:38 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:34.090 06:51:38 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:34.090 06:51:38 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:34.090 06:51:38 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:34.090 06:51:38 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:34.090 06:51:38 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:34.090 06:51:38 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:34.090 06:51:38 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:34.090 06:51:38 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:34.090 06:51:38 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:34.090 06:51:38 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:34.090 06:51:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:34.090 06:51:38 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:34.090 06:51:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:34.090 06:51:38 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:34.090 06:51:38 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:34.090 06:51:38 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:34.348 06:51:38 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:34.348 06:51:38 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:34.348 06:51:38 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:34.348 06:51:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:34.348 06:51:38 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:34.348 06:51:38 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:34.348 06:51:38 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:34.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:34.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:25:34.348 00:25:34.348 --- 10.0.0.2 ping statistics --- 00:25:34.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.348 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:25:34.348 06:51:38 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:34.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:34.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:25:34.348 00:25:34.348 --- 10.0.0.1 ping statistics --- 00:25:34.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.348 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:25:34.348 06:51:38 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:34.348 06:51:38 -- nvmf/common.sh@411 -- # return 0 00:25:34.348 06:51:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:34.348 06:51:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:34.348 06:51:38 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:34.348 06:51:38 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:34.348 06:51:38 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:34.348 06:51:38 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:34.348 06:51:38 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:34.348 06:51:38 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:34.348 06:51:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:34.348 06:51:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:34.348 06:51:38 -- common/autotest_common.sh@10 -- # set +x 00:25:34.348 06:51:38 -- nvmf/common.sh@470 -- # nvmfpid=70629 00:25:34.348 06:51:38 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:34.348 06:51:38 -- nvmf/common.sh@471 -- # waitforlisten 70629 00:25:34.348 06:51:38 -- common/autotest_common.sh@817 -- # '[' -z 70629 ']' 00:25:34.348 06:51:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.348 06:51:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:34.348 06:51:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:34.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:34.348 06:51:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:34.348 06:51:38 -- common/autotest_common.sh@10 -- # set +x 00:25:34.348 [2024-04-17 06:51:38.837043] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:25:34.348 [2024-04-17 06:51:38.837138] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:34.348 EAL: No free 2048 kB hugepages reported on node 1 00:25:34.348 [2024-04-17 06:51:38.909705] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:34.606 [2024-04-17 06:51:39.001741] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:34.606 [2024-04-17 06:51:39.001793] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:34.606 [2024-04-17 06:51:39.001821] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:34.606 [2024-04-17 06:51:39.001833] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:34.606 [2024-04-17 06:51:39.001843] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
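At this point the test network is in place: the target port cvl_0_0 has been moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator-side cvl_0_1 keeps 10.0.0.1/24 in the root namespace, TCP port 4420 is opened in iptables, and both directions answer ping. Condensed from the trace above (interface and namespace names as logged):
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1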
00:25:34.606 [2024-04-17 06:51:39.001899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:34.606 [2024-04-17 06:51:39.001924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:34.606 [2024-04-17 06:51:39.001994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:34.606 [2024-04-17 06:51:39.001997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.606 06:51:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:34.606 06:51:39 -- common/autotest_common.sh@850 -- # return 0 00:25:34.606 06:51:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:34.606 06:51:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:34.606 06:51:39 -- common/autotest_common.sh@10 -- # set +x 00:25:34.606 06:51:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:34.606 06:51:39 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:34.606 06:51:39 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:37.885 06:51:42 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:37.885 06:51:42 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:37.885 06:51:42 -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:25:37.885 06:51:42 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:38.143 06:51:42 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:38.143 06:51:42 -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:25:38.143 06:51:42 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:38.143 06:51:42 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:38.143 06:51:42 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:38.400 [2024-04-17 06:51:42.953438] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:38.400 06:51:42 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:38.657 06:51:43 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:38.657 06:51:43 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:38.914 06:51:43 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:38.914 06:51:43 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:39.172 06:51:43 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:39.430 [2024-04-17 06:51:43.915940] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:39.430 06:51:43 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:39.687 06:51:44 -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:25:39.687 06:51:44 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:25:39.687 06:51:44 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 
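Before the first fabric run, the target has been provisioned entirely through the RPC interface: a Malloc ramdisk bdev, the local NVMe drive at 0000:88:00.0 (Nvme0n1), a TCP transport, a subsystem with both bdevs as namespaces, and listeners on 10.0.0.2:4420. A condensed recap of the commands traced above, with the rpc.py path shortened:
  scripts/rpc.py bdev_malloc_create 64 512
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
The spdk_nvme_perf invocations that follow sweep queue depth (-q), I/O size (-o) and read/write mix (-w randrw -M 50) for a fixed duration (-t), first against the local PCIe drive and then against the TCP transport ID passed with -r.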
00:25:39.687 06:51:44 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:25:41.059 Initializing NVMe Controllers 00:25:41.059 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:25:41.059 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:25:41.059 Initialization complete. Launching workers. 00:25:41.059 ======================================================== 00:25:41.059 Latency(us) 00:25:41.059 Device Information : IOPS MiB/s Average min max 00:25:41.059 PCIE (0000:88:00.0) NSID 1 from core 0: 86360.76 337.35 370.05 11.27 6268.14 00:25:41.059 ======================================================== 00:25:41.059 Total : 86360.76 337.35 370.05 11.27 6268.14 00:25:41.059 00:25:41.059 06:51:45 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:41.059 EAL: No free 2048 kB hugepages reported on node 1 00:25:42.431 Initializing NVMe Controllers 00:25:42.431 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:42.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:42.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:42.431 Initialization complete. Launching workers. 00:25:42.431 ======================================================== 00:25:42.431 Latency(us) 00:25:42.431 Device Information : IOPS MiB/s Average min max 00:25:42.431 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 63.00 0.25 16121.14 235.72 44409.10 00:25:42.431 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 45.00 0.18 22321.73 5969.47 55849.26 00:25:42.431 ======================================================== 00:25:42.431 Total : 108.00 0.42 18704.72 235.72 55849.26 00:25:42.431 00:25:42.431 06:51:46 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:42.431 EAL: No free 2048 kB hugepages reported on node 1 00:25:43.804 Initializing NVMe Controllers 00:25:43.804 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:43.804 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:43.804 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:43.804 Initialization complete. Launching workers. 
00:25:43.804 ======================================================== 00:25:43.804 Latency(us) 00:25:43.804 Device Information : IOPS MiB/s Average min max 00:25:43.804 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8241.98 32.20 3895.02 472.24 7675.42 00:25:43.804 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3912.99 15.29 8215.69 6107.61 15840.81 00:25:43.804 ======================================================== 00:25:43.804 Total : 12154.98 47.48 5285.95 472.24 15840.81 00:25:43.804 00:25:43.804 06:51:48 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:43.804 06:51:48 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:43.804 06:51:48 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:43.804 EAL: No free 2048 kB hugepages reported on node 1 00:25:46.332 Initializing NVMe Controllers 00:25:46.332 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:46.332 Controller IO queue size 128, less than required. 00:25:46.332 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:46.332 Controller IO queue size 128, less than required. 00:25:46.332 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:46.332 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:46.332 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:46.332 Initialization complete. Launching workers. 00:25:46.332 ======================================================== 00:25:46.332 Latency(us) 00:25:46.332 Device Information : IOPS MiB/s Average min max 00:25:46.332 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1110.76 277.69 119029.17 70593.92 207834.40 00:25:46.332 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 575.87 143.97 232980.47 101797.74 386406.24 00:25:46.332 ======================================================== 00:25:46.332 Total : 1686.63 421.66 157936.08 70593.92 386406.24 00:25:46.332 00:25:46.332 06:51:50 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:46.332 EAL: No free 2048 kB hugepages reported on node 1 00:25:46.589 No valid NVMe controllers or AIO or URING devices found 00:25:46.589 Initializing NVMe Controllers 00:25:46.589 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:46.589 Controller IO queue size 128, less than required. 00:25:46.589 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:46.589 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:46.589 Controller IO queue size 128, less than required. 00:25:46.589 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:46.589 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:25:46.589 WARNING: Some requested NVMe devices were skipped 00:25:46.589 06:51:50 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:46.589 EAL: No free 2048 kB hugepages reported on node 1 00:25:49.116 Initializing NVMe Controllers 00:25:49.116 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:49.116 Controller IO queue size 128, less than required. 00:25:49.116 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:49.116 Controller IO queue size 128, less than required. 00:25:49.116 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:49.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:49.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:49.116 Initialization complete. Launching workers. 00:25:49.116 00:25:49.116 ==================== 00:25:49.116 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:49.116 TCP transport: 00:25:49.116 polls: 6829 00:25:49.116 idle_polls: 4989 00:25:49.116 sock_completions: 1840 00:25:49.116 nvme_completions: 3185 00:25:49.116 submitted_requests: 4748 00:25:49.116 queued_requests: 1 00:25:49.116 00:25:49.116 ==================== 00:25:49.116 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:49.116 TCP transport: 00:25:49.116 polls: 7145 00:25:49.116 idle_polls: 5280 00:25:49.116 sock_completions: 1865 00:25:49.116 nvme_completions: 3727 00:25:49.116 submitted_requests: 5568 00:25:49.116 queued_requests: 1 00:25:49.116 ======================================================== 00:25:49.116 Latency(us) 00:25:49.116 Device Information : IOPS MiB/s Average min max 00:25:49.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 794.94 198.73 165322.32 76411.87 267971.31 00:25:49.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 930.26 232.56 140421.66 74022.97 188012.53 00:25:49.116 ======================================================== 00:25:49.116 Total : 1725.20 431.30 151895.43 74022.97 267971.31 00:25:49.116 00:25:49.116 06:51:53 -- host/perf.sh@66 -- # sync 00:25:49.116 06:51:53 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:49.374 06:51:53 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:25:49.374 06:51:53 -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:25:49.374 06:51:53 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:25:53.551 06:51:57 -- host/perf.sh@72 -- # ls_guid=396934b9-bf6e-4416-9486-fd99d92a41c3 00:25:53.551 06:51:57 -- host/perf.sh@73 -- # get_lvs_free_mb 396934b9-bf6e-4416-9486-fd99d92a41c3 00:25:53.551 06:51:57 -- common/autotest_common.sh@1350 -- # local lvs_uuid=396934b9-bf6e-4416-9486-fd99d92a41c3 00:25:53.551 06:51:57 -- common/autotest_common.sh@1351 -- # local lvs_info 00:25:53.551 06:51:57 -- common/autotest_common.sh@1352 -- # local fc 00:25:53.551 06:51:57 -- common/autotest_common.sh@1353 -- # local cs 00:25:53.551 06:51:57 -- common/autotest_common.sh@1354 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:53.551 06:51:57 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:25:53.551 { 00:25:53.551 "uuid": "396934b9-bf6e-4416-9486-fd99d92a41c3", 00:25:53.551 "name": "lvs_0", 00:25:53.551 "base_bdev": "Nvme0n1", 00:25:53.551 "total_data_clusters": 238234, 00:25:53.551 "free_clusters": 238234, 00:25:53.551 "block_size": 512, 00:25:53.551 "cluster_size": 4194304 00:25:53.551 } 00:25:53.551 ]' 00:25:53.551 06:51:57 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="396934b9-bf6e-4416-9486-fd99d92a41c3") .free_clusters' 00:25:53.551 06:51:57 -- common/autotest_common.sh@1355 -- # fc=238234 00:25:53.551 06:51:57 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="396934b9-bf6e-4416-9486-fd99d92a41c3") .cluster_size' 00:25:53.551 06:51:57 -- common/autotest_common.sh@1356 -- # cs=4194304 00:25:53.551 06:51:57 -- common/autotest_common.sh@1359 -- # free_mb=952936 00:25:53.551 06:51:57 -- common/autotest_common.sh@1360 -- # echo 952936 00:25:53.551 952936 00:25:53.551 06:51:57 -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:25:53.551 06:51:57 -- host/perf.sh@78 -- # free_mb=20480 00:25:53.551 06:51:57 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 396934b9-bf6e-4416-9486-fd99d92a41c3 lbd_0 20480 00:25:53.551 06:51:58 -- host/perf.sh@80 -- # lb_guid=5be18771-d802-4fd8-83e5-90e139d80fcc 00:25:53.551 06:51:58 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 5be18771-d802-4fd8-83e5-90e139d80fcc lvs_n_0 00:25:54.481 06:51:58 -- host/perf.sh@83 -- # ls_nested_guid=65b50574-4d82-44a5-96a5-d928c51da323 00:25:54.481 06:51:58 -- host/perf.sh@84 -- # get_lvs_free_mb 65b50574-4d82-44a5-96a5-d928c51da323 00:25:54.481 06:51:58 -- common/autotest_common.sh@1350 -- # local lvs_uuid=65b50574-4d82-44a5-96a5-d928c51da323 00:25:54.481 06:51:58 -- common/autotest_common.sh@1351 -- # local lvs_info 00:25:54.481 06:51:58 -- common/autotest_common.sh@1352 -- # local fc 00:25:54.481 06:51:58 -- common/autotest_common.sh@1353 -- # local cs 00:25:54.481 06:51:58 -- common/autotest_common.sh@1354 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:54.481 06:51:59 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:25:54.481 { 00:25:54.481 "uuid": "396934b9-bf6e-4416-9486-fd99d92a41c3", 00:25:54.481 "name": "lvs_0", 00:25:54.481 "base_bdev": "Nvme0n1", 00:25:54.481 "total_data_clusters": 238234, 00:25:54.481 "free_clusters": 233114, 00:25:54.481 "block_size": 512, 00:25:54.481 "cluster_size": 4194304 00:25:54.481 }, 00:25:54.481 { 00:25:54.481 "uuid": "65b50574-4d82-44a5-96a5-d928c51da323", 00:25:54.481 "name": "lvs_n_0", 00:25:54.481 "base_bdev": "5be18771-d802-4fd8-83e5-90e139d80fcc", 00:25:54.481 "total_data_clusters": 5114, 00:25:54.481 "free_clusters": 5114, 00:25:54.481 "block_size": 512, 00:25:54.481 "cluster_size": 4194304 00:25:54.481 } 00:25:54.481 ]' 00:25:54.481 06:51:59 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="65b50574-4d82-44a5-96a5-d928c51da323") .free_clusters' 00:25:54.481 06:51:59 -- common/autotest_common.sh@1355 -- # fc=5114 00:25:54.481 06:51:59 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="65b50574-4d82-44a5-96a5-d928c51da323") .cluster_size' 00:25:54.481 06:51:59 -- common/autotest_common.sh@1356 -- # cs=4194304 00:25:54.481 06:51:59 -- common/autotest_common.sh@1359 -- # 
free_mb=20456 00:25:54.481 06:51:59 -- common/autotest_common.sh@1360 -- # echo 20456 00:25:54.481 20456 00:25:54.481 06:51:59 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:25:54.481 06:51:59 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 65b50574-4d82-44a5-96a5-d928c51da323 lbd_nest_0 20456 00:25:54.739 06:51:59 -- host/perf.sh@88 -- # lb_nested_guid=eade155e-eefd-4087-b4c7-7a2454ed2421 00:25:54.739 06:51:59 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:54.996 06:51:59 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:25:54.996 06:51:59 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 eade155e-eefd-4087-b4c7-7a2454ed2421 00:25:55.253 06:51:59 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:55.510 06:52:00 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:25:55.510 06:52:00 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:25:55.510 06:52:00 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:25:55.510 06:52:00 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:55.510 06:52:00 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:55.510 EAL: No free 2048 kB hugepages reported on node 1 00:26:07.731 Initializing NVMe Controllers 00:26:07.731 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:07.731 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:07.731 Initialization complete. Launching workers. 00:26:07.731 ======================================================== 00:26:07.731 Latency(us) 00:26:07.731 Device Information : IOPS MiB/s Average min max 00:26:07.731 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 49.90 0.02 20040.62 214.94 46718.78 00:26:07.731 ======================================================== 00:26:07.731 Total : 49.90 0.02 20040.62 214.94 46718.78 00:26:07.731 00:26:07.731 06:52:10 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:07.731 06:52:10 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:07.731 EAL: No free 2048 kB hugepages reported on node 1 00:26:17.690 Initializing NVMe Controllers 00:26:17.690 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:17.690 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:17.690 Initialization complete. Launching workers. 
00:26:17.690 ======================================================== 00:26:17.690 Latency(us) 00:26:17.690 Device Information : IOPS MiB/s Average min max 00:26:17.690 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 68.19 8.52 14664.47 4976.85 50859.52 00:26:17.690 ======================================================== 00:26:17.690 Total : 68.19 8.52 14664.47 4976.85 50859.52 00:26:17.690 00:26:17.690 06:52:20 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:17.690 06:52:20 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:17.690 06:52:20 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:17.690 EAL: No free 2048 kB hugepages reported on node 1 00:26:27.650 Initializing NVMe Controllers 00:26:27.650 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:27.650 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:27.650 Initialization complete. Launching workers. 00:26:27.650 ======================================================== 00:26:27.650 Latency(us) 00:26:27.650 Device Information : IOPS MiB/s Average min max 00:26:27.650 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7143.97 3.49 4484.15 648.68 44529.93 00:26:27.650 ======================================================== 00:26:27.650 Total : 7143.97 3.49 4484.15 648.68 44529.93 00:26:27.650 00:26:27.650 06:52:31 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:27.650 06:52:31 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:27.650 EAL: No free 2048 kB hugepages reported on node 1 00:26:37.620 Initializing NVMe Controllers 00:26:37.620 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:37.621 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:37.621 Initialization complete. Launching workers. 00:26:37.621 ======================================================== 00:26:37.621 Latency(us) 00:26:37.621 Device Information : IOPS MiB/s Average min max 00:26:37.621 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1710.90 213.86 18706.98 1871.92 39282.89 00:26:37.621 ======================================================== 00:26:37.621 Total : 1710.90 213.86 18706.98 1871.92 39282.89 00:26:37.621 00:26:37.621 06:52:41 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:37.621 06:52:41 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:37.621 06:52:41 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:37.621 EAL: No free 2048 kB hugepages reported on node 1 00:26:47.610 Initializing NVMe Controllers 00:26:47.610 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:47.610 Controller IO queue size 128, less than required. 00:26:47.610 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:47.610 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:47.610 Initialization complete. Launching workers. 
00:26:47.610 ======================================================== 00:26:47.610 Latency(us) 00:26:47.610 Device Information : IOPS MiB/s Average min max 00:26:47.610 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10467.45 5.11 12235.08 2078.52 25299.11 00:26:47.610 ======================================================== 00:26:47.610 Total : 10467.45 5.11 12235.08 2078.52 25299.11 00:26:47.610 00:26:47.611 06:52:51 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:47.611 06:52:51 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:47.611 EAL: No free 2048 kB hugepages reported on node 1 00:26:57.577 Initializing NVMe Controllers 00:26:57.577 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:57.577 Controller IO queue size 128, less than required. 00:26:57.577 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:57.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:57.577 Initialization complete. Launching workers. 00:26:57.577 ======================================================== 00:26:57.577 Latency(us) 00:26:57.577 Device Information : IOPS MiB/s Average min max 00:26:57.577 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1224.14 153.02 105190.72 23952.53 223485.60 00:26:57.577 ======================================================== 00:26:57.577 Total : 1224.14 153.02 105190.72 23952.53 223485.60 00:26:57.577 00:26:57.577 06:53:02 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:57.835 06:53:02 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete eade155e-eefd-4087-b4c7-7a2454ed2421 00:26:58.767 06:53:03 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:26:59.024 06:53:03 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5be18771-d802-4fd8-83e5-90e139d80fcc 00:26:59.281 06:53:03 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:26:59.539 06:53:03 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:26:59.539 06:53:03 -- host/perf.sh@114 -- # nvmftestfini 00:26:59.539 06:53:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:59.539 06:53:03 -- nvmf/common.sh@117 -- # sync 00:26:59.539 06:53:03 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:59.539 06:53:03 -- nvmf/common.sh@120 -- # set +e 00:26:59.539 06:53:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:59.539 06:53:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:59.539 rmmod nvme_tcp 00:26:59.539 rmmod nvme_fabrics 00:26:59.539 rmmod nvme_keyring 00:26:59.539 06:53:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:59.539 06:53:03 -- nvmf/common.sh@124 -- # set -e 00:26:59.539 06:53:03 -- nvmf/common.sh@125 -- # return 0 00:26:59.539 06:53:03 -- nvmf/common.sh@478 -- # '[' -n 70629 ']' 00:26:59.539 06:53:03 -- nvmf/common.sh@479 -- # killprocess 70629 00:26:59.539 06:53:03 -- common/autotest_common.sh@936 -- # '[' -z 70629 ']' 00:26:59.539 06:53:03 -- common/autotest_common.sh@940 -- # kill -0 
70629 00:26:59.539 06:53:03 -- common/autotest_common.sh@941 -- # uname 00:26:59.539 06:53:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:59.539 06:53:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70629 00:26:59.539 06:53:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:59.539 06:53:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:59.540 06:53:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70629' 00:26:59.540 killing process with pid 70629 00:26:59.540 06:53:04 -- common/autotest_common.sh@955 -- # kill 70629 00:26:59.540 06:53:04 -- common/autotest_common.sh@960 -- # wait 70629 00:27:01.435 06:53:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:01.435 06:53:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:01.435 06:53:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:01.435 06:53:05 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:01.435 06:53:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:01.435 06:53:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.435 06:53:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:01.435 06:53:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.338 06:53:07 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:03.338 00:27:03.338 real 1m31.025s 00:27:03.338 user 5m31.677s 00:27:03.338 sys 0m17.334s 00:27:03.338 06:53:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:03.338 06:53:07 -- common/autotest_common.sh@10 -- # set +x 00:27:03.338 ************************************ 00:27:03.338 END TEST nvmf_perf 00:27:03.338 ************************************ 00:27:03.338 06:53:07 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:03.338 06:53:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:03.338 06:53:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:03.338 06:53:07 -- common/autotest_common.sh@10 -- # set +x 00:27:03.338 ************************************ 00:27:03.338 START TEST nvmf_fio_host 00:27:03.338 ************************************ 00:27:03.338 06:53:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:03.338 * Looking for test storage... 
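For reference, the nvmf_perf phase that just reported "END TEST nvmf_perf" is a small sweep: host/perf.sh loops spdk_nvme_perf over queue depths 1, 32 and 128 and block sizes 512 B and 128 KiB against the 10.0.0.2:4420 listener. A minimal standalone sketch of that loop, using the workspace path from this job, is:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
qd_depth=("1" "32" "128")
io_size=("512" "131072")
for qd in "${qd_depth[@]}"; do
  for o in "${io_size[@]}"; do
    # 50/50 random read/write for 10 s per combination, as in the runs above
    "$SPDK/build/bin/spdk_nvme_perf" -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  done
done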
00:27:03.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:03.338 06:53:07 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:03.338 06:53:07 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:03.338 06:53:07 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:03.338 06:53:07 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:03.338 06:53:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.338 06:53:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.338 06:53:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.338 06:53:07 -- paths/export.sh@5 -- # export PATH 00:27:03.338 06:53:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.338 06:53:07 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:03.338 06:53:07 -- nvmf/common.sh@7 -- # uname -s 00:27:03.338 06:53:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:03.338 06:53:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:03.338 06:53:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:03.338 06:53:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:03.338 06:53:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:03.339 06:53:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:03.339 06:53:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:03.339 06:53:07 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:03.339 06:53:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:03.339 06:53:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:03.339 06:53:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:03.339 06:53:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:03.339 06:53:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:03.339 06:53:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:03.339 06:53:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:03.339 06:53:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:03.339 06:53:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:03.339 06:53:07 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:03.339 06:53:07 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:03.339 06:53:07 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:03.339 06:53:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.339 06:53:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.339 06:53:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.339 06:53:07 -- paths/export.sh@5 -- # export PATH 00:27:03.339 06:53:07 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.339 06:53:07 -- nvmf/common.sh@47 -- # : 0 00:27:03.339 06:53:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:03.339 06:53:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:03.339 06:53:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:03.339 06:53:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:03.339 06:53:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:03.339 06:53:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:03.339 06:53:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:03.339 06:53:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:03.339 06:53:07 -- host/fio.sh@12 -- # nvmftestinit 00:27:03.339 06:53:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:03.339 06:53:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:03.339 06:53:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:03.339 06:53:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:03.339 06:53:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:03.339 06:53:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.339 06:53:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:03.339 06:53:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.339 06:53:07 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:27:03.339 06:53:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:27:03.339 06:53:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:03.339 06:53:07 -- common/autotest_common.sh@10 -- # set +x 00:27:05.238 06:53:09 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:05.238 06:53:09 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:05.238 06:53:09 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:05.238 06:53:09 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:05.238 06:53:09 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:05.238 06:53:09 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:05.238 06:53:09 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:05.238 06:53:09 -- nvmf/common.sh@295 -- # net_devs=() 00:27:05.238 06:53:09 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:05.238 06:53:09 -- nvmf/common.sh@296 -- # e810=() 00:27:05.238 06:53:09 -- nvmf/common.sh@296 -- # local -ga e810 00:27:05.238 06:53:09 -- nvmf/common.sh@297 -- # x722=() 00:27:05.238 06:53:09 -- nvmf/common.sh@297 -- # local -ga x722 00:27:05.238 06:53:09 -- nvmf/common.sh@298 -- # mlx=() 00:27:05.238 06:53:09 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:05.238 06:53:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:05.238 06:53:09 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:05.238 06:53:09 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:05.238 06:53:09 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:05.238 06:53:09 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:05.238 06:53:09 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:05.238 06:53:09 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:05.238 06:53:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:05.238 06:53:09 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:05.238 06:53:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:05.238 06:53:09 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:05.238 06:53:09 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:05.238 06:53:09 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:05.238 06:53:09 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:05.238 06:53:09 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:05.238 06:53:09 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:05.238 06:53:09 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:05.238 06:53:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:05.238 06:53:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:05.238 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:05.238 06:53:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:05.238 06:53:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:05.238 06:53:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.238 06:53:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.238 06:53:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:05.238 06:53:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:05.238 06:53:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:05.238 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:05.238 06:53:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:05.238 06:53:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:05.238 06:53:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.238 06:53:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.238 06:53:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:05.238 06:53:09 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:05.238 06:53:09 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:05.238 06:53:09 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:05.238 06:53:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:05.238 06:53:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.238 06:53:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:05.238 06:53:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.238 06:53:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:05.238 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:05.238 06:53:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.238 06:53:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:05.238 06:53:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.238 06:53:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:05.238 06:53:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.238 06:53:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:05.238 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:05.238 06:53:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.238 06:53:09 -- 
nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:05.238 06:53:09 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:05.238 06:53:09 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:05.238 06:53:09 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:05.238 06:53:09 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:05.238 06:53:09 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:05.238 06:53:09 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:05.238 06:53:09 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:05.238 06:53:09 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:05.238 06:53:09 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:05.238 06:53:09 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:05.238 06:53:09 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:05.238 06:53:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:05.238 06:53:09 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:05.238 06:53:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:05.238 06:53:09 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:05.238 06:53:09 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:05.238 06:53:09 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:05.238 06:53:09 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:05.238 06:53:09 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:05.238 06:53:09 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:05.238 06:53:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:05.238 06:53:09 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:05.238 06:53:09 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:05.238 06:53:09 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:05.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:05.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:27:05.238 00:27:05.238 --- 10.0.0.2 ping statistics --- 00:27:05.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.238 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:27:05.238 06:53:09 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:05.238 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:05.238 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:27:05.238 00:27:05.238 --- 10.0.0.1 ping statistics --- 00:27:05.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.238 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:27:05.238 06:53:09 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:05.238 06:53:09 -- nvmf/common.sh@411 -- # return 0 00:27:05.238 06:53:09 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:05.238 06:53:09 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:05.238 06:53:09 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:05.238 06:53:09 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:05.238 06:53:09 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:05.238 06:53:09 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:05.238 06:53:09 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:05.238 06:53:09 -- host/fio.sh@14 -- # [[ y != y ]] 00:27:05.238 06:53:09 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:27:05.238 06:53:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:05.238 06:53:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.238 06:53:09 -- host/fio.sh@22 -- # nvmfpid=82638 00:27:05.238 06:53:09 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:05.238 06:53:09 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:05.238 06:53:09 -- host/fio.sh@26 -- # waitforlisten 82638 00:27:05.238 06:53:09 -- common/autotest_common.sh@817 -- # '[' -z 82638 ']' 00:27:05.238 06:53:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.238 06:53:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:05.238 06:53:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:05.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:05.238 06:53:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:05.238 06:53:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.496 [2024-04-17 06:53:09.874973] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:27:05.496 [2024-04-17 06:53:09.875042] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:05.496 EAL: No free 2048 kB hugepages reported on node 1 00:27:05.496 [2024-04-17 06:53:09.940951] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:05.496 [2024-04-17 06:53:10.030959] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:05.496 [2024-04-17 06:53:10.031006] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:05.496 [2024-04-17 06:53:10.031031] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:05.496 [2024-04-17 06:53:10.031043] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:05.496 [2024-04-17 06:53:10.031054] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
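Taken end to end, the nvmftestinit steps above build a two-port loopback: one E810 port (cvl_0_0) is moved into its own network namespace and addressed as the target side (10.0.0.2), the second port (cvl_0_1) stays in the default namespace as the initiator side (10.0.0.1), and nvmf_tgt is then launched inside that namespace. Condensed from the trace, with the interface names and addresses this job detected:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP in
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &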
00:27:05.496 [2024-04-17 06:53:10.034203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.496 [2024-04-17 06:53:10.034233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:05.496 [2024-04-17 06:53:10.034291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:05.496 [2024-04-17 06:53:10.034295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.754 06:53:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:05.754 06:53:10 -- common/autotest_common.sh@850 -- # return 0 00:27:05.754 06:53:10 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:05.754 06:53:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:05.754 06:53:10 -- common/autotest_common.sh@10 -- # set +x 00:27:05.754 [2024-04-17 06:53:10.176938] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:05.754 06:53:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:05.754 06:53:10 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:27:05.754 06:53:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:05.754 06:53:10 -- common/autotest_common.sh@10 -- # set +x 00:27:05.754 06:53:10 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:05.754 06:53:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:05.754 06:53:10 -- common/autotest_common.sh@10 -- # set +x 00:27:05.754 Malloc1 00:27:05.754 06:53:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:05.754 06:53:10 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:05.754 06:53:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:05.754 06:53:10 -- common/autotest_common.sh@10 -- # set +x 00:27:05.754 06:53:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:05.754 06:53:10 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:05.754 06:53:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:05.754 06:53:10 -- common/autotest_common.sh@10 -- # set +x 00:27:05.754 06:53:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:05.754 06:53:10 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:05.754 06:53:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:05.754 06:53:10 -- common/autotest_common.sh@10 -- # set +x 00:27:05.754 [2024-04-17 06:53:10.256816] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:05.754 06:53:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:05.754 06:53:10 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:05.754 06:53:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:05.754 06:53:10 -- common/autotest_common.sh@10 -- # set +x 00:27:05.754 06:53:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:05.754 06:53:10 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:05.754 06:53:10 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:05.754 06:53:10 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:05.754 06:53:10 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:05.754 06:53:10 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:05.754 06:53:10 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:05.754 06:53:10 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:05.754 06:53:10 -- common/autotest_common.sh@1327 -- # shift 00:27:05.754 06:53:10 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:05.754 06:53:10 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:05.754 06:53:10 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:05.754 06:53:10 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:05.754 06:53:10 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:05.754 06:53:10 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:05.754 06:53:10 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:05.754 06:53:10 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:05.754 06:53:10 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:05.754 06:53:10 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:05.754 06:53:10 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:05.754 06:53:10 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:05.754 06:53:10 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:05.754 06:53:10 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:05.754 06:53:10 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:06.012 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:06.012 fio-3.35 00:27:06.012 Starting 1 thread 00:27:06.012 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.566 00:27:08.566 test: (groupid=0, jobs=1): err= 0: pid=82855: Wed Apr 17 06:53:12 2024 00:27:08.566 read: IOPS=8622, BW=33.7MiB/s (35.3MB/s)(67.6MiB/2007msec) 00:27:08.566 slat (usec): min=2, max=160, avg= 2.59, stdev= 1.95 00:27:08.566 clat (usec): min=2488, max=13930, avg=8217.16, stdev=670.89 00:27:08.566 lat (usec): min=2515, max=13932, avg=8219.76, stdev=670.79 00:27:08.566 clat percentiles (usec): 00:27:08.566 | 1.00th=[ 6849], 5.00th=[ 7242], 10.00th=[ 7439], 20.00th=[ 7701], 00:27:08.566 | 30.00th=[ 7898], 40.00th=[ 8029], 50.00th=[ 8225], 60.00th=[ 8356], 00:27:08.566 | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 8979], 95.00th=[ 9241], 00:27:08.566 | 99.00th=[ 9765], 99.50th=[10159], 99.90th=[12256], 99.95th=[13566], 00:27:08.566 | 99.99th=[13960] 00:27:08.566 bw ( KiB/s): min=33200, max=35488, per=99.93%, avg=34468.00, stdev=997.71, samples=4 00:27:08.566 iops : min= 8300, max= 8872, avg=8617.00, stdev=249.43, samples=4 00:27:08.566 write: IOPS=8618, BW=33.7MiB/s (35.3MB/s)(67.6MiB/2007msec); 0 zone resets 00:27:08.566 slat (usec): min=2, max=141, avg= 2.75, stdev= 1.46 00:27:08.566 clat (usec): min=1447, 
max=13205, avg=6579.42, stdev=590.32 00:27:08.566 lat (usec): min=1457, max=13207, avg=6582.17, stdev=590.27 00:27:08.566 clat percentiles (usec): 00:27:08.566 | 1.00th=[ 5342], 5.00th=[ 5735], 10.00th=[ 5932], 20.00th=[ 6128], 00:27:08.566 | 30.00th=[ 6325], 40.00th=[ 6456], 50.00th=[ 6587], 60.00th=[ 6718], 00:27:08.566 | 70.00th=[ 6849], 80.00th=[ 6980], 90.00th=[ 7242], 95.00th=[ 7439], 00:27:08.566 | 99.00th=[ 7898], 99.50th=[ 8160], 99.90th=[11076], 99.95th=[12125], 00:27:08.566 | 99.99th=[13173] 00:27:08.566 bw ( KiB/s): min=32960, max=35200, per=100.00%, avg=34486.00, stdev=1041.05, samples=4 00:27:08.566 iops : min= 8240, max= 8800, avg=8621.50, stdev=260.26, samples=4 00:27:08.566 lat (msec) : 2=0.03%, 4=0.09%, 10=99.46%, 20=0.41% 00:27:08.566 cpu : usr=55.53%, sys=38.63%, ctx=69, majf=0, minf=5 00:27:08.566 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:08.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:08.566 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:08.566 issued rwts: total=17306,17298,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:08.566 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:08.566 00:27:08.566 Run status group 0 (all jobs): 00:27:08.566 READ: bw=33.7MiB/s (35.3MB/s), 33.7MiB/s-33.7MiB/s (35.3MB/s-35.3MB/s), io=67.6MiB (70.9MB), run=2007-2007msec 00:27:08.566 WRITE: bw=33.7MiB/s (35.3MB/s), 33.7MiB/s-33.7MiB/s (35.3MB/s-35.3MB/s), io=67.6MiB (70.9MB), run=2007-2007msec 00:27:08.566 06:53:12 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:08.566 06:53:12 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:08.566 06:53:12 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:08.566 06:53:12 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:08.566 06:53:12 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:08.566 06:53:12 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:08.566 06:53:12 -- common/autotest_common.sh@1327 -- # shift 00:27:08.566 06:53:12 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:08.566 06:53:12 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:08.566 06:53:12 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:08.566 06:53:12 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:08.566 06:53:12 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:08.566 06:53:12 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:08.566 06:53:12 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:08.566 06:53:12 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:08.566 06:53:12 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:08.566 06:53:12 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:08.566 06:53:12 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:08.566 06:53:12 -- 
common/autotest_common.sh@1331 -- # asan_lib= 00:27:08.566 06:53:12 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:08.566 06:53:12 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:08.566 06:53:12 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:08.566 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:08.566 fio-3.35 00:27:08.566 Starting 1 thread 00:27:08.566 EAL: No free 2048 kB hugepages reported on node 1 00:27:11.090 00:27:11.090 test: (groupid=0, jobs=1): err= 0: pid=83187: Wed Apr 17 06:53:15 2024 00:27:11.090 read: IOPS=8183, BW=128MiB/s (134MB/s)(257MiB/2007msec) 00:27:11.090 slat (usec): min=2, max=103, avg= 3.44, stdev= 1.54 00:27:11.090 clat (usec): min=2637, max=16904, avg=9328.72, stdev=2230.34 00:27:11.090 lat (usec): min=2641, max=16913, avg=9332.16, stdev=2230.41 00:27:11.090 clat percentiles (usec): 00:27:11.090 | 1.00th=[ 4621], 5.00th=[ 5735], 10.00th=[ 6456], 20.00th=[ 7373], 00:27:11.090 | 30.00th=[ 7963], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[ 9896], 00:27:11.090 | 70.00th=[10683], 80.00th=[11207], 90.00th=[12256], 95.00th=[12911], 00:27:11.090 | 99.00th=[14877], 99.50th=[15270], 99.90th=[16188], 99.95th=[16319], 00:27:11.090 | 99.99th=[16712] 00:27:11.090 bw ( KiB/s): min=58016, max=75296, per=51.26%, avg=67112.00, stdev=8976.68, samples=4 00:27:11.090 iops : min= 3626, max= 4706, avg=4194.50, stdev=561.04, samples=4 00:27:11.090 write: IOPS=4888, BW=76.4MiB/s (80.1MB/s)(137MiB/1799msec); 0 zone resets 00:27:11.090 slat (usec): min=30, max=135, avg=32.99, stdev= 4.22 00:27:11.090 clat (usec): min=4868, max=17552, avg=11176.13, stdev=2044.02 00:27:11.090 lat (usec): min=4900, max=17583, avg=11209.13, stdev=2044.08 00:27:11.090 clat percentiles (usec): 00:27:11.090 | 1.00th=[ 7373], 5.00th=[ 8291], 10.00th=[ 8848], 20.00th=[ 9372], 00:27:11.090 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10814], 60.00th=[11469], 00:27:11.090 | 70.00th=[12256], 80.00th=[13042], 90.00th=[14091], 95.00th=[14877], 00:27:11.090 | 99.00th=[16188], 99.50th=[16450], 99.90th=[17433], 99.95th=[17433], 00:27:11.090 | 99.99th=[17433] 00:27:11.090 bw ( KiB/s): min=59904, max=78048, per=89.21%, avg=69784.00, stdev=9374.65, samples=4 00:27:11.090 iops : min= 3744, max= 4878, avg=4361.50, stdev=585.92, samples=4 00:27:11.090 lat (msec) : 4=0.12%, 10=51.03%, 20=48.85% 00:27:11.090 cpu : usr=71.88%, sys=23.88%, ctx=51, majf=0, minf=1 00:27:11.090 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:27:11.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:11.090 issued rwts: total=16424,8795,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.090 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:11.090 00:27:11.090 Run status group 0 (all jobs): 00:27:11.091 READ: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=257MiB (269MB), run=2007-2007msec 00:27:11.091 WRITE: bw=76.4MiB/s (80.1MB/s), 76.4MiB/s-76.4MiB/s (80.1MB/s-80.1MB/s), io=137MiB (144MB), run=1799-1799msec 00:27:11.091 06:53:15 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:11.091 06:53:15 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.091 06:53:15 -- common/autotest_common.sh@10 -- # set +x 00:27:11.091 06:53:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.091 06:53:15 -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:27:11.091 06:53:15 -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:27:11.091 06:53:15 -- host/fio.sh@49 -- # get_nvme_bdfs 00:27:11.091 06:53:15 -- common/autotest_common.sh@1499 -- # bdfs=() 00:27:11.091 06:53:15 -- common/autotest_common.sh@1499 -- # local bdfs 00:27:11.091 06:53:15 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:11.091 06:53:15 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:11.091 06:53:15 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:27:11.091 06:53:15 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:27:11.091 06:53:15 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:88:00.0 00:27:11.091 06:53:15 -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:27:11.091 06:53:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.091 06:53:15 -- common/autotest_common.sh@10 -- # set +x 00:27:14.367 Nvme0n1 00:27:14.367 06:53:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:14.367 06:53:18 -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:27:14.367 06:53:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:14.367 06:53:18 -- common/autotest_common.sh@10 -- # set +x 00:27:16.892 06:53:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:16.892 06:53:20 -- host/fio.sh@51 -- # ls_guid=6faa58f4-f6b2-49c8-91b9-2400636f30e0 00:27:16.892 06:53:20 -- host/fio.sh@52 -- # get_lvs_free_mb 6faa58f4-f6b2-49c8-91b9-2400636f30e0 00:27:16.892 06:53:20 -- common/autotest_common.sh@1350 -- # local lvs_uuid=6faa58f4-f6b2-49c8-91b9-2400636f30e0 00:27:16.892 06:53:20 -- common/autotest_common.sh@1351 -- # local lvs_info 00:27:16.892 06:53:20 -- common/autotest_common.sh@1352 -- # local fc 00:27:16.892 06:53:20 -- common/autotest_common.sh@1353 -- # local cs 00:27:16.892 06:53:20 -- common/autotest_common.sh@1354 -- # rpc_cmd bdev_lvol_get_lvstores 00:27:16.892 06:53:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:16.892 06:53:20 -- common/autotest_common.sh@10 -- # set +x 00:27:16.892 06:53:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:16.892 06:53:20 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:27:16.892 { 00:27:16.892 "uuid": "6faa58f4-f6b2-49c8-91b9-2400636f30e0", 00:27:16.892 "name": "lvs_0", 00:27:16.892 "base_bdev": "Nvme0n1", 00:27:16.892 "total_data_clusters": 930, 00:27:16.893 "free_clusters": 930, 00:27:16.893 "block_size": 512, 00:27:16.893 "cluster_size": 1073741824 00:27:16.893 } 00:27:16.893 ]' 00:27:16.893 06:53:20 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="6faa58f4-f6b2-49c8-91b9-2400636f30e0") .free_clusters' 00:27:16.893 06:53:20 -- common/autotest_common.sh@1355 -- # fc=930 00:27:16.893 06:53:20 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="6faa58f4-f6b2-49c8-91b9-2400636f30e0") .cluster_size' 00:27:16.893 06:53:20 -- common/autotest_common.sh@1356 -- # cs=1073741824 00:27:16.893 06:53:20 -- common/autotest_common.sh@1359 -- # free_mb=952320 00:27:16.893 06:53:20 -- common/autotest_common.sh@1360 -- # echo 952320 00:27:16.893 952320 00:27:16.893 06:53:20 
-- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 952320 00:27:16.893 06:53:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:16.893 06:53:20 -- common/autotest_common.sh@10 -- # set +x 00:27:16.893 332fc85a-2ec0-4311-b5ec-63c00618852f 00:27:16.893 06:53:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:16.893 06:53:21 -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:27:16.893 06:53:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:16.893 06:53:21 -- common/autotest_common.sh@10 -- # set +x 00:27:16.893 06:53:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:16.893 06:53:21 -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:27:16.893 06:53:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:16.893 06:53:21 -- common/autotest_common.sh@10 -- # set +x 00:27:16.893 06:53:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:16.893 06:53:21 -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:16.893 06:53:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:16.893 06:53:21 -- common/autotest_common.sh@10 -- # set +x 00:27:16.893 06:53:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:16.893 06:53:21 -- host/fio.sh@57 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:16.893 06:53:21 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:16.893 06:53:21 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:16.893 06:53:21 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:16.893 06:53:21 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:16.893 06:53:21 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:16.893 06:53:21 -- common/autotest_common.sh@1327 -- # shift 00:27:16.893 06:53:21 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:16.893 06:53:21 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:16.893 06:53:21 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:16.893 06:53:21 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:16.893 06:53:21 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:16.893 06:53:21 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:16.893 06:53:21 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:16.893 06:53:21 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:16.893 06:53:21 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:16.893 06:53:21 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:16.893 06:53:21 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:16.893 06:53:21 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:16.893 06:53:21 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:16.893 06:53:21 -- common/autotest_common.sh@1338 -- # 
LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:16.893 06:53:21 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:16.893 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:16.893 fio-3.35 00:27:16.893 Starting 1 thread 00:27:16.893 EAL: No free 2048 kB hugepages reported on node 1 00:27:19.419 00:27:19.419 test: (groupid=0, jobs=1): err= 0: pid=84301: Wed Apr 17 06:53:23 2024 00:27:19.419 read: IOPS=5321, BW=20.8MiB/s (21.8MB/s)(41.7MiB/2008msec) 00:27:19.419 slat (nsec): min=1933, max=293243, avg=2426.63, stdev=3965.47 00:27:19.419 clat (usec): min=1129, max=172450, avg=13285.83, stdev=12203.02 00:27:19.419 lat (usec): min=1134, max=172505, avg=13288.25, stdev=12203.78 00:27:19.419 clat percentiles (msec): 00:27:19.419 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 12], 00:27:19.419 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 13], 00:27:19.419 | 70.00th=[ 13], 80.00th=[ 14], 90.00th=[ 14], 95.00th=[ 15], 00:27:19.419 | 99.00th=[ 16], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 174], 00:27:19.419 | 99.99th=[ 174] 00:27:19.419 bw ( KiB/s): min=15224, max=23328, per=99.64%, avg=21208.00, stdev=3990.54, samples=4 00:27:19.419 iops : min= 3806, max= 5832, avg=5302.00, stdev=997.64, samples=4 00:27:19.419 write: IOPS=5304, BW=20.7MiB/s (21.7MB/s)(41.6MiB/2008msec); 0 zone resets 00:27:19.419 slat (usec): min=2, max=224, avg= 2.54, stdev= 2.45 00:27:19.419 clat (usec): min=470, max=169008, avg=10615.69, stdev=11485.23 00:27:19.419 lat (usec): min=474, max=169022, avg=10618.23, stdev=11485.99 00:27:19.419 clat percentiles (msec): 00:27:19.419 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10], 00:27:19.419 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 10], 60.00th=[ 11], 00:27:19.419 | 70.00th=[ 11], 80.00th=[ 11], 90.00th=[ 11], 95.00th=[ 12], 00:27:19.419 | 99.00th=[ 13], 99.50th=[ 155], 99.90th=[ 169], 99.95th=[ 169], 00:27:19.419 | 99.99th=[ 169] 00:27:19.419 bw ( KiB/s): min=16040, max=23296, per=99.97%, avg=21210.00, stdev=3458.66, samples=4 00:27:19.419 iops : min= 4010, max= 5824, avg=5302.50, stdev=864.67, samples=4 00:27:19.419 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:27:19.419 lat (msec) : 2=0.03%, 4=0.09%, 10=30.88%, 20=68.34%, 50=0.04% 00:27:19.419 lat (msec) : 250=0.60% 00:27:19.419 cpu : usr=54.31%, sys=41.41%, ctx=77, majf=0, minf=19 00:27:19.419 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:27:19.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:19.419 issued rwts: total=10685,10651,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:19.419 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:19.419 00:27:19.419 Run status group 0 (all jobs): 00:27:19.419 READ: bw=20.8MiB/s (21.8MB/s), 20.8MiB/s-20.8MiB/s (21.8MB/s-21.8MB/s), io=41.7MiB (43.8MB), run=2008-2008msec 00:27:19.419 WRITE: bw=20.7MiB/s (21.7MB/s), 20.7MiB/s-20.7MiB/s (21.7MB/s-21.7MB/s), io=41.6MiB (43.6MB), run=2008-2008msec 00:27:19.420 06:53:23 -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:19.420 06:53:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:19.420 06:53:23 -- common/autotest_common.sh@10 -- # set +x 
00:27:19.420 06:53:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:19.420 06:53:23 -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:27:19.420 06:53:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:19.420 06:53:23 -- common/autotest_common.sh@10 -- # set +x 00:27:20.351 06:53:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:20.351 06:53:24 -- host/fio.sh@62 -- # ls_nested_guid=52ba3c3c-32b8-45e2-a541-67cbb07f24d7 00:27:20.351 06:53:24 -- host/fio.sh@63 -- # get_lvs_free_mb 52ba3c3c-32b8-45e2-a541-67cbb07f24d7 00:27:20.351 06:53:24 -- common/autotest_common.sh@1350 -- # local lvs_uuid=52ba3c3c-32b8-45e2-a541-67cbb07f24d7 00:27:20.351 06:53:24 -- common/autotest_common.sh@1351 -- # local lvs_info 00:27:20.351 06:53:24 -- common/autotest_common.sh@1352 -- # local fc 00:27:20.351 06:53:24 -- common/autotest_common.sh@1353 -- # local cs 00:27:20.351 06:53:24 -- common/autotest_common.sh@1354 -- # rpc_cmd bdev_lvol_get_lvstores 00:27:20.351 06:53:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:20.351 06:53:24 -- common/autotest_common.sh@10 -- # set +x 00:27:20.351 06:53:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:20.351 06:53:24 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:27:20.351 { 00:27:20.351 "uuid": "6faa58f4-f6b2-49c8-91b9-2400636f30e0", 00:27:20.351 "name": "lvs_0", 00:27:20.351 "base_bdev": "Nvme0n1", 00:27:20.351 "total_data_clusters": 930, 00:27:20.351 "free_clusters": 0, 00:27:20.351 "block_size": 512, 00:27:20.351 "cluster_size": 1073741824 00:27:20.351 }, 00:27:20.351 { 00:27:20.351 "uuid": "52ba3c3c-32b8-45e2-a541-67cbb07f24d7", 00:27:20.351 "name": "lvs_n_0", 00:27:20.352 "base_bdev": "332fc85a-2ec0-4311-b5ec-63c00618852f", 00:27:20.352 "total_data_clusters": 237847, 00:27:20.352 "free_clusters": 237847, 00:27:20.352 "block_size": 512, 00:27:20.352 "cluster_size": 4194304 00:27:20.352 } 00:27:20.352 ]' 00:27:20.352 06:53:24 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="52ba3c3c-32b8-45e2-a541-67cbb07f24d7") .free_clusters' 00:27:20.352 06:53:24 -- common/autotest_common.sh@1355 -- # fc=237847 00:27:20.352 06:53:24 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="52ba3c3c-32b8-45e2-a541-67cbb07f24d7") .cluster_size' 00:27:20.352 06:53:24 -- common/autotest_common.sh@1356 -- # cs=4194304 00:27:20.352 06:53:24 -- common/autotest_common.sh@1359 -- # free_mb=951388 00:27:20.352 06:53:24 -- common/autotest_common.sh@1360 -- # echo 951388 00:27:20.352 951388 00:27:20.352 06:53:24 -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:27:20.352 06:53:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:20.352 06:53:24 -- common/autotest_common.sh@10 -- # set +x 00:27:20.609 115d3341-8778-41b1-bea9-c4046eacf086 00:27:20.609 06:53:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:20.609 06:53:25 -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:27:20.609 06:53:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:20.609 06:53:25 -- common/autotest_common.sh@10 -- # set +x 00:27:20.609 06:53:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:20.609 06:53:25 -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:27:20.609 06:53:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:20.609 06:53:25 -- common/autotest_common.sh@10 -- # set +x 00:27:20.609 06:53:25 -- 
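The lvol sizing used for lvs_0 above (and repeated next for the nested lvs_n_0) converts the store's free clusters into MiB and carves out a single lvol bdev of exactly that size; with 930 free clusters of 1 GiB that works out to the 952320 seen in the trace. A condensed sketch, with rpc.py referenced from this workspace:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
ls_guid=$("$RPC" bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0)        # 1 GiB clusters
lvs_info=$("$RPC" bdev_lvol_get_lvstores)
fc=$(jq ".[] | select(.uuid==\"$ls_guid\") .free_clusters" <<< "$lvs_info")   # 930
cs=$(jq ".[] | select(.uuid==\"$ls_guid\") .cluster_size"  <<< "$lvs_info")   # 1073741824
free_mb=$((fc * cs / 1024 / 1024))                                            # 952320
"$RPC" bdev_lvol_create -l lvs_0 lbd_0 "$free_mb"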
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:20.609 06:53:25 -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:27:20.609 06:53:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:20.609 06:53:25 -- common/autotest_common.sh@10 -- # set +x 00:27:20.867 06:53:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:20.867 06:53:25 -- host/fio.sh@68 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:20.867 06:53:25 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:20.867 06:53:25 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:20.867 06:53:25 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:20.867 06:53:25 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:20.867 06:53:25 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:20.867 06:53:25 -- common/autotest_common.sh@1327 -- # shift 00:27:20.867 06:53:25 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:20.867 06:53:25 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:20.867 06:53:25 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:20.867 06:53:25 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:20.867 06:53:25 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:20.867 06:53:25 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:20.867 06:53:25 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:20.867 06:53:25 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:20.867 06:53:25 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:20.867 06:53:25 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:20.867 06:53:25 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:20.867 06:53:25 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:20.867 06:53:25 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:20.867 06:53:25 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:20.867 06:53:25 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:20.867 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:20.867 fio-3.35 00:27:20.867 Starting 1 thread 00:27:21.124 EAL: No free 2048 kB hugepages reported on node 1 00:27:23.652 00:27:23.652 test: (groupid=0, jobs=1): err= 0: pid=84791: Wed Apr 17 06:53:27 2024 00:27:23.652 read: IOPS=5720, BW=22.3MiB/s (23.4MB/s)(44.9MiB/2009msec) 00:27:23.652 slat (usec): min=2, max=155, avg= 2.55, stdev= 2.34 00:27:23.652 clat (usec): min=4658, max=21571, avg=12332.28, stdev=1095.83 00:27:23.652 lat (usec): min=4663, max=21574, avg=12334.83, stdev=1095.78 00:27:23.652 
clat percentiles (usec): 00:27:23.652 | 1.00th=[ 9896], 5.00th=[10683], 10.00th=[11076], 20.00th=[11469], 00:27:23.652 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12387], 60.00th=[12518], 00:27:23.652 | 70.00th=[12780], 80.00th=[13173], 90.00th=[13698], 95.00th=[14091], 00:27:23.652 | 99.00th=[14746], 99.50th=[15270], 99.90th=[18744], 99.95th=[20317], 00:27:23.652 | 99.99th=[20317] 00:27:23.652 bw ( KiB/s): min=20752, max=23696, per=99.91%, avg=22860.00, stdev=1409.04, samples=4 00:27:23.652 iops : min= 5188, max= 5924, avg=5715.00, stdev=352.26, samples=4 00:27:23.652 write: IOPS=5708, BW=22.3MiB/s (23.4MB/s)(44.8MiB/2009msec); 0 zone resets 00:27:23.652 slat (usec): min=2, max=112, avg= 2.64, stdev= 1.54 00:27:23.652 clat (usec): min=2279, max=19043, avg=9849.87, stdev=988.26 00:27:23.652 lat (usec): min=2285, max=19045, avg=9852.52, stdev=988.24 00:27:23.652 clat percentiles (usec): 00:27:23.652 | 1.00th=[ 7570], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 9110], 00:27:23.652 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10028], 00:27:23.652 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10945], 95.00th=[11338], 00:27:23.652 | 99.00th=[11994], 99.50th=[12518], 99.90th=[16909], 99.95th=[17957], 00:27:23.652 | 99.99th=[19006] 00:27:23.652 bw ( KiB/s): min=21720, max=23424, per=99.88%, avg=22806.00, stdev=753.57, samples=4 00:27:23.652 iops : min= 5430, max= 5856, avg=5701.50, stdev=188.39, samples=4 00:27:23.652 lat (msec) : 4=0.05%, 10=29.17%, 20=70.74%, 50=0.04% 00:27:23.652 cpu : usr=54.78%, sys=41.14%, ctx=112, majf=0, minf=19 00:27:23.652 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:27:23.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:23.652 issued rwts: total=11492,11468,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:23.652 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:23.652 00:27:23.652 Run status group 0 (all jobs): 00:27:23.652 READ: bw=22.3MiB/s (23.4MB/s), 22.3MiB/s-22.3MiB/s (23.4MB/s-23.4MB/s), io=44.9MiB (47.1MB), run=2009-2009msec 00:27:23.652 WRITE: bw=22.3MiB/s (23.4MB/s), 22.3MiB/s-22.3MiB/s (23.4MB/s-23.4MB/s), io=44.8MiB (47.0MB), run=2009-2009msec 00:27:23.652 06:53:27 -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:23.652 06:53:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.652 06:53:27 -- common/autotest_common.sh@10 -- # set +x 00:27:23.652 06:53:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:23.652 06:53:27 -- host/fio.sh@72 -- # sync 00:27:23.652 06:53:27 -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:27:23.652 06:53:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.652 06:53:27 -- common/autotest_common.sh@10 -- # set +x 00:27:26.940 06:53:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:26.940 06:53:31 -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:27:26.940 06:53:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:26.940 06:53:31 -- common/autotest_common.sh@10 -- # set +x 00:27:26.940 06:53:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:26.940 06:53:31 -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:27:26.940 06:53:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:26.940 06:53:31 -- common/autotest_common.sh@10 -- # set +x 00:27:29.495 06:53:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
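The fio run summarized above is driven through the SPDK fio plugin rather than a kernel block device: fio_nvme preloads the spdk_nvme ioengine and encodes the NVMe-oF/TCP target coordinates in the --filename argument. A minimal sketch of that invocation, using the same paths and addresses as this run (the job file example_config.fio is assumed to select ioengine=spdk, as the output above indicates):

# Preload the SPDK external ioengine and hand fio the target coordinates.
# --filename is a space-separated key=value list: transport type, address family,
# target address, service id (TCP port) and namespace id.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
  /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096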
00:27:29.495 06:53:34 -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:27:29.495 06:53:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:29.495 06:53:34 -- common/autotest_common.sh@10 -- # set +x 00:27:29.495 06:53:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:29.495 06:53:34 -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:27:29.495 06:53:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:29.495 06:53:34 -- common/autotest_common.sh@10 -- # set +x 00:27:31.394 06:53:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.394 06:53:35 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:27:31.394 06:53:35 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:27:31.394 06:53:35 -- host/fio.sh@84 -- # nvmftestfini 00:27:31.394 06:53:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:31.394 06:53:35 -- nvmf/common.sh@117 -- # sync 00:27:31.394 06:53:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:31.394 06:53:35 -- nvmf/common.sh@120 -- # set +e 00:27:31.394 06:53:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:31.394 06:53:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:31.394 rmmod nvme_tcp 00:27:31.394 rmmod nvme_fabrics 00:27:31.394 rmmod nvme_keyring 00:27:31.394 06:53:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:31.394 06:53:35 -- nvmf/common.sh@124 -- # set -e 00:27:31.394 06:53:35 -- nvmf/common.sh@125 -- # return 0 00:27:31.394 06:53:35 -- nvmf/common.sh@478 -- # '[' -n 82638 ']' 00:27:31.394 06:53:35 -- nvmf/common.sh@479 -- # killprocess 82638 00:27:31.394 06:53:35 -- common/autotest_common.sh@936 -- # '[' -z 82638 ']' 00:27:31.394 06:53:35 -- common/autotest_common.sh@940 -- # kill -0 82638 00:27:31.394 06:53:35 -- common/autotest_common.sh@941 -- # uname 00:27:31.394 06:53:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:31.394 06:53:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82638 00:27:31.394 06:53:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:31.394 06:53:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:31.394 06:53:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82638' 00:27:31.394 killing process with pid 82638 00:27:31.394 06:53:35 -- common/autotest_common.sh@955 -- # kill 82638 00:27:31.394 06:53:35 -- common/autotest_common.sh@960 -- # wait 82638 00:27:31.653 06:53:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:31.653 06:53:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:31.653 06:53:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:31.653 06:53:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:31.653 06:53:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:31.653 06:53:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.653 06:53:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:31.654 06:53:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.557 06:53:38 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:33.557 00:27:33.557 real 0m30.273s 00:27:33.557 user 1m48.962s 00:27:33.557 sys 0m6.219s 00:27:33.557 06:53:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:33.557 06:53:38 -- common/autotest_common.sh@10 -- # set +x 00:27:33.557 ************************************ 00:27:33.557 END TEST nvmf_fio_host 00:27:33.557 ************************************ 00:27:33.557 06:53:38 -- nvmf/nvmf.sh@98 
-- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:33.557 06:53:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:33.557 06:53:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:33.557 06:53:38 -- common/autotest_common.sh@10 -- # set +x 00:27:33.815 ************************************ 00:27:33.815 START TEST nvmf_failover 00:27:33.815 ************************************ 00:27:33.815 06:53:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:33.815 * Looking for test storage... 00:27:33.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:33.815 06:53:38 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:33.815 06:53:38 -- nvmf/common.sh@7 -- # uname -s 00:27:33.815 06:53:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:33.815 06:53:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:33.815 06:53:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:33.815 06:53:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:33.815 06:53:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:33.815 06:53:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:33.815 06:53:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:33.815 06:53:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:33.815 06:53:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:33.815 06:53:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:33.815 06:53:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:33.815 06:53:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:33.815 06:53:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:33.815 06:53:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:33.815 06:53:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:33.815 06:53:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:33.815 06:53:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:33.815 06:53:38 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:33.815 06:53:38 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:33.815 06:53:38 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:33.815 06:53:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.815 06:53:38 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.815 06:53:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.815 06:53:38 -- paths/export.sh@5 -- # export PATH 00:27:33.816 06:53:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.816 06:53:38 -- nvmf/common.sh@47 -- # : 0 00:27:33.816 06:53:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:33.816 06:53:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:33.816 06:53:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:33.816 06:53:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:33.816 06:53:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:33.816 06:53:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:33.816 06:53:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:33.816 06:53:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:33.816 06:53:38 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:33.816 06:53:38 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:33.816 06:53:38 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:33.816 06:53:38 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:33.816 06:53:38 -- host/failover.sh@18 -- # nvmftestinit 00:27:33.816 06:53:38 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:33.816 06:53:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:33.816 06:53:38 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:33.816 06:53:38 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:33.816 06:53:38 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:33.816 06:53:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.816 06:53:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:33.816 06:53:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.816 06:53:38 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:27:33.816 06:53:38 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 
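Because NET_TYPE=phy, nvmftestinit at this point enumerates physical NICs: gather_supported_nvmf_pci_devs (traced below) builds lists of supported Intel/Mellanox PCI functions and then resolves each one to its kernel net device through sysfs. A rough sketch of that resolution step, with the vendor/device-ID filtering left out and the two PCI addresses from this machine used as stand-ins:

# Map a PCI network function to the net devices the kernel created for it,
# which is how the trace below arrives at cvl_0_0 and cvl_0_1.
for pci in 0000:0a:00.0 0000:0a:00.1; do
  for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$netdev" ] && echo "Found net device under $pci: ${netdev##*/}"
  done
done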
00:27:33.816 06:53:38 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:33.816 06:53:38 -- common/autotest_common.sh@10 -- # set +x 00:27:35.716 06:53:40 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:35.716 06:53:40 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:35.716 06:53:40 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:35.716 06:53:40 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:35.716 06:53:40 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:35.716 06:53:40 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:35.716 06:53:40 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:35.716 06:53:40 -- nvmf/common.sh@295 -- # net_devs=() 00:27:35.716 06:53:40 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:35.716 06:53:40 -- nvmf/common.sh@296 -- # e810=() 00:27:35.716 06:53:40 -- nvmf/common.sh@296 -- # local -ga e810 00:27:35.716 06:53:40 -- nvmf/common.sh@297 -- # x722=() 00:27:35.716 06:53:40 -- nvmf/common.sh@297 -- # local -ga x722 00:27:35.716 06:53:40 -- nvmf/common.sh@298 -- # mlx=() 00:27:35.716 06:53:40 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:35.716 06:53:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:35.716 06:53:40 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:35.716 06:53:40 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:35.716 06:53:40 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:35.716 06:53:40 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:35.716 06:53:40 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:35.716 06:53:40 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:35.716 06:53:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:35.716 06:53:40 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:35.716 06:53:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:35.716 06:53:40 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:35.716 06:53:40 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:35.716 06:53:40 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:35.716 06:53:40 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:35.716 06:53:40 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:35.716 06:53:40 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:35.716 06:53:40 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:35.716 06:53:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:35.716 06:53:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:35.716 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:35.716 06:53:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:35.716 06:53:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:35.716 06:53:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:35.716 06:53:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:35.716 06:53:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:35.716 06:53:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:35.716 06:53:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:35.716 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:35.716 06:53:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:35.716 06:53:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:35.716 06:53:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:27:35.716 06:53:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:35.716 06:53:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:35.716 06:53:40 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:35.716 06:53:40 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:35.716 06:53:40 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:35.716 06:53:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:35.716 06:53:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:35.716 06:53:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:35.717 06:53:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:35.717 06:53:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:35.717 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:35.717 06:53:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:35.717 06:53:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:35.717 06:53:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:35.717 06:53:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:35.717 06:53:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:35.717 06:53:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:35.717 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:35.717 06:53:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:35.717 06:53:40 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:35.717 06:53:40 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:35.717 06:53:40 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:35.717 06:53:40 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:35.717 06:53:40 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:35.717 06:53:40 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:35.717 06:53:40 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:35.717 06:53:40 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:35.717 06:53:40 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:35.717 06:53:40 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:35.717 06:53:40 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:35.717 06:53:40 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:35.717 06:53:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:35.717 06:53:40 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:35.717 06:53:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:35.717 06:53:40 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:35.717 06:53:40 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:35.717 06:53:40 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:35.717 06:53:40 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:35.717 06:53:40 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:35.717 06:53:40 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:35.717 06:53:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:35.717 06:53:40 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:35.717 06:53:40 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:35.717 06:53:40 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:35.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:27:35.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:27:35.717 00:27:35.717 --- 10.0.0.2 ping statistics --- 00:27:35.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:35.717 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:27:35.717 06:53:40 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:35.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:35.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:27:35.717 00:27:35.717 --- 10.0.0.1 ping statistics --- 00:27:35.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:35.717 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:27:35.717 06:53:40 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:35.717 06:53:40 -- nvmf/common.sh@411 -- # return 0 00:27:35.717 06:53:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:35.717 06:53:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:35.717 06:53:40 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:35.717 06:53:40 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:35.717 06:53:40 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:35.717 06:53:40 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:35.717 06:53:40 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:35.717 06:53:40 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:35.717 06:53:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:35.717 06:53:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:35.717 06:53:40 -- common/autotest_common.sh@10 -- # set +x 00:27:35.717 06:53:40 -- nvmf/common.sh@470 -- # nvmfpid=87910 00:27:35.717 06:53:40 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:35.717 06:53:40 -- nvmf/common.sh@471 -- # waitforlisten 87910 00:27:35.717 06:53:40 -- common/autotest_common.sh@817 -- # '[' -z 87910 ']' 00:27:35.717 06:53:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:35.717 06:53:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:35.717 06:53:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:35.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:35.717 06:53:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:35.717 06:53:40 -- common/autotest_common.sh@10 -- # set +x 00:27:35.975 [2024-04-17 06:53:40.332331] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:27:35.975 [2024-04-17 06:53:40.332408] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:35.975 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.975 [2024-04-17 06:53:40.406615] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:35.975 [2024-04-17 06:53:40.496204] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:35.975 [2024-04-17 06:53:40.496264] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:35.975 [2024-04-17 06:53:40.496288] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:35.975 [2024-04-17 06:53:40.496301] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:35.975 [2024-04-17 06:53:40.496313] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:35.975 [2024-04-17 06:53:40.496392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:35.975 [2024-04-17 06:53:40.496437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:35.975 [2024-04-17 06:53:40.496439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:36.232 06:53:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:36.232 06:53:40 -- common/autotest_common.sh@850 -- # return 0 00:27:36.232 06:53:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:36.232 06:53:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:36.232 06:53:40 -- common/autotest_common.sh@10 -- # set +x 00:27:36.232 06:53:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:36.232 06:53:40 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:36.490 [2024-04-17 06:53:40.849906] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:36.490 06:53:40 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:36.771 Malloc0 00:27:36.771 06:53:41 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:37.028 06:53:41 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:37.028 06:53:41 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:37.285 [2024-04-17 06:53:41.866198] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:37.285 06:53:41 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:37.543 [2024-04-17 06:53:42.098805] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:37.543 06:53:42 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:37.801 [2024-04-17 06:53:42.335568] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:37.801 06:53:42 -- host/failover.sh@31 -- # bdevperf_pid=88195 00:27:37.801 06:53:42 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:27:37.801 06:53:42 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:37.801 06:53:42 -- host/failover.sh@34 -- # waitforlisten 88195 
/var/tmp/bdevperf.sock 00:27:37.801 06:53:42 -- common/autotest_common.sh@817 -- # '[' -z 88195 ']' 00:27:37.801 06:53:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:37.801 06:53:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:37.801 06:53:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:37.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:37.801 06:53:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:37.801 06:53:42 -- common/autotest_common.sh@10 -- # set +x 00:27:38.058 06:53:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:38.058 06:53:42 -- common/autotest_common.sh@850 -- # return 0 00:27:38.058 06:53:42 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:38.623 NVMe0n1 00:27:38.623 06:53:43 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:38.881 00:27:38.881 06:53:43 -- host/failover.sh@39 -- # run_test_pid=88332 00:27:38.881 06:53:43 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:38.881 06:53:43 -- host/failover.sh@41 -- # sleep 1 00:27:39.827 06:53:44 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:40.094 [2024-04-17 06:53:44.630136] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71eb0 is same with the state(5) to be set 00:27:40.094 [2024-04-17 06:53:44.630223] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71eb0 is same with the state(5) to be set 00:27:40.094 [2024-04-17 06:53:44.630250] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71eb0 is same with the state(5) to be set 00:27:40.095 [2024-04-17 06:53:44.630262] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71eb0 is same with the state(5) to be set 00:27:40.095 [2024-04-17 06:53:44.630275] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71eb0 is same with the state(5) to be set 00:27:40.095 [2024-04-17 06:53:44.630287] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71eb0 is same with the state(5) to be set 00:27:40.095 [2024-04-17 06:53:44.630299] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71eb0 is same with the state(5) to be set 00:27:40.095 [2024-04-17 06:53:44.630312] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71eb0 is same with the state(5) to be set 00:27:40.095 [2024-04-17 06:53:44.630323] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71eb0 is same with the state(5) to be set 00:27:40.095 [2024-04-17 06:53:44.630335] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71eb0 is same with the state(5) to be set 00:27:40.095 [2024-04-17 06:53:44.630347] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71eb0 is same with the state(5) to be set
[last message repeated for tqpair=0x1a71eb0 with increasing timestamps; duplicate lines omitted]
00:27:40.095 [2024-04-17 06:53:44.630882] tcp.c:1587:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1a71eb0 is same with the state(5) to be set 00:27:40.095 [2024-04-17 06:53:44.630894] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71eb0 is same with the state(5) to be set 00:27:40.095 [2024-04-17 06:53:44.630906] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71eb0 is same with the state(5) to be set 00:27:40.095 [2024-04-17 06:53:44.630917] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71eb0 is same with the state(5) to be set 00:27:40.095 [2024-04-17 06:53:44.630929] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71eb0 is same with the state(5) to be set 00:27:40.095 [2024-04-17 06:53:44.630940] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71eb0 is same with the state(5) to be set 00:27:40.095 [2024-04-17 06:53:44.630952] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71eb0 is same with the state(5) to be set 00:27:40.095 [2024-04-17 06:53:44.630963] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71eb0 is same with the state(5) to be set 00:27:40.095 [2024-04-17 06:53:44.630975] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71eb0 is same with the state(5) to be set 00:27:40.095 [2024-04-17 06:53:44.630992] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71eb0 is same with the state(5) to be set 00:27:40.095 06:53:44 -- host/failover.sh@45 -- # sleep 3 00:27:43.376 06:53:47 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:43.635 00:27:43.635 06:53:48 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:43.940 [2024-04-17 06:53:48.276471] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a72360 is same with the state(5) to be set 00:27:43.940 [2024-04-17 06:53:48.276540] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a72360 is same with the state(5) to be set 00:27:43.940 [2024-04-17 06:53:48.276555] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a72360 is same with the state(5) to be set 00:27:43.940 [2024-04-17 06:53:48.276567] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a72360 is same with the state(5) to be set 00:27:43.940 [2024-04-17 06:53:48.276578] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a72360 is same with the state(5) to be set 00:27:43.940 [2024-04-17 06:53:48.276598] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a72360 is same with the state(5) to be set 00:27:43.940 [2024-04-17 06:53:48.276611] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a72360 is same with the state(5) to be set 00:27:43.940 [2024-04-17 06:53:48.276623] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a72360 is same with the state(5) to be set 00:27:43.940 [2024-04-17 06:53:48.276634] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a72360 is same with the state(5) to be set 
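The error lines around this point appear right after each listener removal, as the target transitions the TCP queue pairs that were using the removed port; each burst corresponds to one failover step in host/failover.sh. Condensed into a sketch, the sequence this run has executed so far (rpc.py paths shortened to $rpc_py, everything else as traced):

# bdevperf is running a 15 s verify workload against NVMe0n1 over ports 4420 and 4421.
$rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # drop the first path
sleep 3
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # add a path on 4422
$rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # drop the second path
# The remaining steps further down re-add 4420 and finally remove 4422 before waiting for bdevperf.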
00:27:43.940 [2024-04-17 06:53:48.276645] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a72360 is same with the state(5) to be set
[last message repeated for tqpair=0x1a72360 with increasing timestamps; duplicate lines omitted]
00:27:43.940 [2024-04-17 06:53:48.276893] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x1a72360 is same with the state(5) to be set 00:27:43.940 [2024-04-17 06:53:48.276904] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a72360 is same with the state(5) to be set 00:27:43.940 [2024-04-17 06:53:48.276919] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a72360 is same with the state(5) to be set 00:27:43.940 [2024-04-17 06:53:48.276931] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a72360 is same with the state(5) to be set 00:27:43.940 [2024-04-17 06:53:48.276944] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a72360 is same with the state(5) to be set 00:27:43.940 [2024-04-17 06:53:48.276958] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a72360 is same with the state(5) to be set 00:27:43.940 [2024-04-17 06:53:48.276970] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a72360 is same with the state(5) to be set 00:27:43.940 [2024-04-17 06:53:48.276981] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a72360 is same with the state(5) to be set 00:27:43.940 [2024-04-17 06:53:48.276992] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a72360 is same with the state(5) to be set 00:27:43.940 [2024-04-17 06:53:48.277004] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a72360 is same with the state(5) to be set 00:27:43.940 06:53:48 -- host/failover.sh@50 -- # sleep 3 00:27:47.228 06:53:51 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:47.228 [2024-04-17 06:53:51.531846] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:47.228 06:53:51 -- host/failover.sh@55 -- # sleep 1 00:27:48.162 06:53:52 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:48.421 [2024-04-17 06:53:52.773136] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818130 is same with the state(5) to be set 00:27:48.421 [2024-04-17 06:53:52.773210] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818130 is same with the state(5) to be set 00:27:48.421 [2024-04-17 06:53:52.773235] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818130 is same with the state(5) to be set 00:27:48.421 [2024-04-17 06:53:52.773249] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818130 is same with the state(5) to be set 00:27:48.421 [2024-04-17 06:53:52.773262] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818130 is same with the state(5) to be set 00:27:48.421 [2024-04-17 06:53:52.773274] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818130 is same with the state(5) to be set 00:27:48.421 [2024-04-17 06:53:52.773286] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818130 is same with the state(5) to be set 00:27:48.421 [2024-04-17 06:53:52.773298] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818130 is same with the state(5) to be set 00:27:48.421 [2024-04-17 06:53:52.773311] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818130 is same with the state(5) to be set
[last message repeated for tqpair=0x1818130 with increasing timestamps; duplicate lines omitted]
00:27:48.421 [2024-04-17 06:53:52.773611] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818130 is same with the
state(5) to be set 00:27:48.422 [2024-04-17 06:53:52.773622] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818130 is same with the state(5) to be set 00:27:48.422 [2024-04-17 06:53:52.773633] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818130 is same with the state(5) to be set 00:27:48.422 [2024-04-17 06:53:52.773644] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818130 is same with the state(5) to be set 00:27:48.422 [2024-04-17 06:53:52.773655] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818130 is same with the state(5) to be set 00:27:48.422 [2024-04-17 06:53:52.773667] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818130 is same with the state(5) to be set 00:27:48.422 [2024-04-17 06:53:52.773678] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818130 is same with the state(5) to be set 00:27:48.422 [2024-04-17 06:53:52.773689] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818130 is same with the state(5) to be set 00:27:48.422 [2024-04-17 06:53:52.773701] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818130 is same with the state(5) to be set 00:27:48.422 [2024-04-17 06:53:52.773712] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818130 is same with the state(5) to be set 00:27:48.422 [2024-04-17 06:53:52.773723] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818130 is same with the state(5) to be set 00:27:48.422 [2024-04-17 06:53:52.773735] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818130 is same with the state(5) to be set 00:27:48.422 [2024-04-17 06:53:52.773746] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818130 is same with the state(5) to be set 00:27:48.422 [2024-04-17 06:53:52.773761] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818130 is same with the state(5) to be set 00:27:48.422 [2024-04-17 06:53:52.773773] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818130 is same with the state(5) to be set 00:27:48.422 [2024-04-17 06:53:52.773785] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818130 is same with the state(5) to be set 00:27:48.422 [2024-04-17 06:53:52.773796] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818130 is same with the state(5) to be set 00:27:48.422 [2024-04-17 06:53:52.773808] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818130 is same with the state(5) to be set 00:27:48.422 06:53:52 -- host/failover.sh@59 -- # wait 88332 00:27:54.985 0 00:27:54.985 06:53:58 -- host/failover.sh@61 -- # killprocess 88195 00:27:54.985 06:53:58 -- common/autotest_common.sh@936 -- # '[' -z 88195 ']' 00:27:54.985 06:53:58 -- common/autotest_common.sh@940 -- # kill -0 88195 00:27:54.985 06:53:58 -- common/autotest_common.sh@941 -- # uname 00:27:54.985 06:53:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:54.985 06:53:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88195 00:27:54.985 06:53:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:54.985 06:53:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:54.985 06:53:58 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 88195' 00:27:54.985 killing process with pid 88195 00:27:54.986 06:53:58 -- common/autotest_common.sh@955 -- # kill 88195 00:27:54.986 06:53:58 -- common/autotest_common.sh@960 -- # wait 88195 00:27:54.986 06:53:58 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:54.986 [2024-04-17 06:53:42.398066] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:27:54.986 [2024-04-17 06:53:42.398158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88195 ] 00:27:54.986 EAL: No free 2048 kB hugepages reported on node 1 00:27:54.986 [2024-04-17 06:53:42.458055] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.986 [2024-04-17 06:53:42.541762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.986 Running I/O for 15 seconds... 00:27:54.986 [2024-04-17 06:53:44.631348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.631388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.986 [2024-04-17 06:53:44.631416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.631432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.986 [2024-04-17 06:53:44.631448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.631463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.986 [2024-04-17 06:53:44.631478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:75232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.631507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.986 [2024-04-17 06:53:44.631522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.631535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.986 [2024-04-17 06:53:44.631550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:75248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.631563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.986 [2024-04-17 06:53:44.631592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.631606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
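The READ / ABORTED - SQ DELETION pairs in this part of try.txt are the initiator-side view of the same failover steps: when a listener is pulled, the queues on that path are torn down and commands still outstanding on them complete with the ABORTED - SQ DELETION status seen here. For reference, a sketch of how the bdevperf process whose log is being dumped was driven in this run (SPDK repository path shortened to $spdk; flags as in the trace further up):

# Start bdevperf idle on a private RPC socket: 128-deep, 4 KiB, verify workload for 15 s.
$spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
# Attach the NVMe-oF/TCP paths through that socket (likewise for port 4421), then start the workload.
$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests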
00:27:54.986 [2024-04-17 06:53:44.631620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:75264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.631633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.986 [2024-04-17 06:53:44.631647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.631660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.986 [2024-04-17 06:53:44.631673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.631686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.986 [2024-04-17 06:53:44.631700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.631713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.986 [2024-04-17 06:53:44.631734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:75296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.631747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.986 [2024-04-17 06:53:44.631761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.631773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.986 [2024-04-17 06:53:44.631787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.631799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.986 [2024-04-17 06:53:44.631813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.631825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.986 [2024-04-17 06:53:44.631839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.631851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.986 [2024-04-17 06:53:44.631865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.631883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.986 [2024-04-17 06:53:44.631897] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.631909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.986 [2024-04-17 06:53:44.631923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.631935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.986 [2024-04-17 06:53:44.631948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.631960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.986 [2024-04-17 06:53:44.631974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:75368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.631987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.986 [2024-04-17 06:53:44.632000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.632012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.986 [2024-04-17 06:53:44.632026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.632038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.986 [2024-04-17 06:53:44.632052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.632068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.986 [2024-04-17 06:53:44.632083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:75400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.632096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.986 [2024-04-17 06:53:44.632109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.632122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.986 [2024-04-17 06:53:44.632136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.632148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.986 [2024-04-17 06:53:44.632162] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.632182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.986 [2024-04-17 06:53:44.632216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.632230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.986 [2024-04-17 06:53:44.632244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:75440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.632258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.986 [2024-04-17 06:53:44.632273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.632286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.986 [2024-04-17 06:53:44.632300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.632314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.986 [2024-04-17 06:53:44.632329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:75464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.632344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.986 [2024-04-17 06:53:44.632360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:75472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.632375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.986 [2024-04-17 06:53:44.632391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.986 [2024-04-17 06:53:44.632405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.632420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:75488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.632433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.632451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:75496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.632465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.632480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:31 nsid:1 lba:75504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.632507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.632521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:75512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.632534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.632548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:75520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.632560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.632575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.632587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.632601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:75536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.632613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.632628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:75544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.632641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.632655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:75552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.632668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.632682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:75560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.632694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.632708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:75568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.632721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.632735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:75576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.632747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.632762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75584 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.632774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.632788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:75592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.632804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.632819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:75600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.632832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.632846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:75608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.632859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.632873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:75616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.632887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.632901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:75624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.632914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.632928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.632940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.632954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:75640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.632966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.632981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:75648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.632993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.633007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:75656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.633019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.633033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:75664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:54.987 [2024-04-17 06:53:44.633046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.633060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:75672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.633072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.633086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.633098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.633112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.633124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.633138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:75696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.633158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.633173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.633208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.633224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:75712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.633237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.633252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.633265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.633279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:75728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.633292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.633307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:75736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.633319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.633334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.633346] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.633366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.633380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.633394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.633407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.633421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-04-17 06:53:44.633435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.987 [2024-04-17 06:53:44.633449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:75776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-04-17 06:53:44.633461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.633476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-04-17 06:53:44.633504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.633518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-04-17 06:53:44.633531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.633548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-04-17 06:53:44.633561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.633575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-04-17 06:53:44.633587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.633601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-04-17 06:53:44.633613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.633627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-04-17 06:53:44.633639] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.633652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-04-17 06:53:44.633665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.633678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-04-17 06:53:44.633690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.633704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-04-17 06:53:44.633717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.633731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-04-17 06:53:44.633744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.633757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-04-17 06:53:44.633770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.633784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-04-17 06:53:44.633796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.633815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-04-17 06:53:44.633829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.633843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-04-17 06:53:44.633855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.633869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-04-17 06:53:44.633884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.633899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.988 [2024-04-17 06:53:44.633912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.633926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.988 [2024-04-17 06:53:44.633939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.633953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.988 [2024-04-17 06:53:44.633965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.633979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.988 [2024-04-17 06:53:44.633991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.634005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.988 [2024-04-17 06:53:44.634018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.634031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.988 [2024-04-17 06:53:44.634044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.634058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.988 [2024-04-17 06:53:44.634070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.634084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.988 [2024-04-17 06:53:44.634097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.634110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.988 [2024-04-17 06:53:44.634122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.634136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.988 [2024-04-17 06:53:44.634152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.634166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.988 [2024-04-17 06:53:44.634201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:54.988 [2024-04-17 06:53:44.634218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.988 [2024-04-17 06:53:44.634231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.634250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.988 [2024-04-17 06:53:44.634263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.634278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.988 [2024-04-17 06:53:44.634291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.634305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.988 [2024-04-17 06:53:44.634318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.634332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.988 [2024-04-17 06:53:44.634345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.634359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.988 [2024-04-17 06:53:44.634371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.634386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.988 [2024-04-17 06:53:44.634399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.634413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.988 [2024-04-17 06:53:44.634425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.634440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.988 [2024-04-17 06:53:44.634460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.634474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.988 [2024-04-17 06:53:44.634502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.634516] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.988 [2024-04-17 06:53:44.634529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.988 [2024-04-17 06:53:44.634542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.989 [2024-04-17 06:53:44.634554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:44.634568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.989 [2024-04-17 06:53:44.634581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:44.634594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.989 [2024-04-17 06:53:44.634607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:44.634623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.989 [2024-04-17 06:53:44.634641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:44.634655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.989 [2024-04-17 06:53:44.634668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:44.634681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.989 [2024-04-17 06:53:44.634693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:44.634708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.989 [2024-04-17 06:53:44.634721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:44.634734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.989 [2024-04-17 06:53:44.634747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:44.634760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.989 [2024-04-17 06:53:44.634773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:44.634787] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.989 [2024-04-17 06:53:44.634799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:44.634813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.989 [2024-04-17 06:53:44.634825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:44.634839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.989 [2024-04-17 06:53:44.634851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:44.634865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.989 [2024-04-17 06:53:44.634877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:44.634890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.989 [2024-04-17 06:53:44.634902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:44.634932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.989 [2024-04-17 06:53:44.634945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:44.634959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.989 [2024-04-17 06:53:44.634975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:44.634989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.989 [2024-04-17 06:53:44.635003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:44.635018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.989 [2024-04-17 06:53:44.635030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:44.635043] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72fce0 is same with the state(5) to be set 00:27:54.989 [2024-04-17 06:53:44.635058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:54.989 [2024-04-17 06:53:44.635069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:54.989 [2024-04-17 06:53:44.635084] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75904 len:8 PRP1 0x0 PRP2 0x0 00:27:54.989 [2024-04-17 06:53:44.635097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:44.635155] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x72fce0 was disconnected and freed. reset controller. 00:27:54.989 [2024-04-17 06:53:44.635173] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:27:54.989 [2024-04-17 06:53:44.635237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.989 [2024-04-17 06:53:44.635257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:44.635272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.989 [2024-04-17 06:53:44.635284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:44.635297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.989 [2024-04-17 06:53:44.635310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:44.635324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.989 [2024-04-17 06:53:44.635336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:44.635349] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:54.989 [2024-04-17 06:53:44.635410] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x710e60 (9): Bad file descriptor 00:27:54.989 [2024-04-17 06:53:44.638683] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:54.989 [2024-04-17 06:53:44.720828] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:54.989 [2024-04-17 06:53:48.277187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:77664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.989 [2024-04-17 06:53:48.277233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:48.277261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:77672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.989 [2024-04-17 06:53:48.277283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:48.277300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:77680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.989 [2024-04-17 06:53:48.277314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:48.277328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:77688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.989 [2024-04-17 06:53:48.277342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:48.277357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:77696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.989 [2024-04-17 06:53:48.277370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:48.277384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:77704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.989 [2024-04-17 06:53:48.277397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:48.277412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.989 [2024-04-17 06:53:48.277439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:48.277454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:77720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.989 [2024-04-17 06:53:48.277467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:48.277496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:77728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.989 [2024-04-17 06:53:48.277509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:48.277523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:77736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.989 [2024-04-17 06:53:48.277536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:48.277550] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:77744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.989 [2024-04-17 06:53:48.277562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:48.277576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.989 [2024-04-17 06:53:48.277588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.989 [2024-04-17 06:53:48.277602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:77760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.990 [2024-04-17 06:53:48.277615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.990 [2024-04-17 06:53:48.277628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:77768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.990 [2024-04-17 06:53:48.277641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.990 [2024-04-17 06:53:48.277659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:77776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.990 [2024-04-17 06:53:48.277671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.990 [2024-04-17 06:53:48.277686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:77784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.990 [2024-04-17 06:53:48.277698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.990 [2024-04-17 06:53:48.277712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:77792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.990 [2024-04-17 06:53:48.277725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.990 [2024-04-17 06:53:48.277739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:77800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.990 [2024-04-17 06:53:48.277751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.990 [2024-04-17 06:53:48.277767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:77808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.990 [2024-04-17 06:53:48.277780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.990 [2024-04-17 06:53:48.277793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.990 [2024-04-17 06:53:48.277806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.990 [2024-04-17 06:53:48.277820] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:77824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.990 [2024-04-17 06:53:48.277832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.990 [2024-04-17 06:53:48.277846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:77832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.990 [2024-04-17 06:53:48.277858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.990 [2024-04-17 06:53:48.277872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:77840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.990 [2024-04-17 06:53:48.277884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.990 [2024-04-17 06:53:48.277898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:77848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.990 [2024-04-17 06:53:48.277910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.990 [2024-04-17 06:53:48.277924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:77856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.990 [2024-04-17 06:53:48.277936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.990 [2024-04-17 06:53:48.277950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:77864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.990 [2024-04-17 06:53:48.277962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.990 [2024-04-17 06:53:48.277976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:77872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.990 [2024-04-17 06:53:48.277992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.990 [2024-04-17 06:53:48.278006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:77880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.990 [2024-04-17 06:53:48.278019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.990 [2024-04-17 06:53:48.278033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:77888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.990 [2024-04-17 06:53:48.278046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.990 [2024-04-17 06:53:48.278060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:77896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.990 [2024-04-17 06:53:48.278072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.990 [2024-04-17 06:53:48.278086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:71 nsid:1 lba:77904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.990 [2024-04-17 06:53:48.278098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.990 [2024-04-17 06:53:48.278112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.990 [2024-04-17 06:53:48.278125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.990 [2024-04-17 06:53:48.278138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:77920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.990 [2024-04-17 06:53:48.278150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.990 [2024-04-17 06:53:48.278164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:77928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.990 [2024-04-17 06:53:48.278197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.990 [2024-04-17 06:53:48.278215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:77936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.990 [2024-04-17 06:53:48.278228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.990 [2024-04-17 06:53:48.278243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:77944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.990 [2024-04-17 06:53:48.278255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.990 [2024-04-17 06:53:48.278269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.990 [2024-04-17 06:53:48.278282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.990 [2024-04-17 06:53:48.278296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.990 [2024-04-17 06:53:48.278309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.990 [2024-04-17 06:53:48.278323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.990 [2024-04-17 06:53:48.278336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.990 [2024-04-17 06:53:48.278350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.990 [2024-04-17 06:53:48.278366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.990 [2024-04-17 06:53:48.278381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:77984 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.278394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.278407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.278420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.278435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.278448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.278462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.278474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.278504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.278517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.278530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.278542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.278556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.278568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.278582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.278594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.278608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.278620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.278634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.278646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.278660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:54.991 [2024-04-17 06:53:48.278672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.278686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.278697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.278715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.278728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.278742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.278754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.278768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.278780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.278794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.278806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.278820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.278832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.278846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.278858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.278872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.278884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.278900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.278914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.278928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 
06:53:48.278941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.278955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.278967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.278982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.278994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.279009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.279022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.279035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.279052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.279067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.279080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.279095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.279108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.279123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.279136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.279150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.279184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.279202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.279216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.279247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.279261] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.279276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.279290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.279305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.279318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.279334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.279348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.279363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.279377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.279392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.279405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.279420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.279434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.279453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.279467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.991 [2024-04-17 06:53:48.279498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.991 [2024-04-17 06:53:48.279512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.279526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.279554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.279569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.279582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.279597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.279610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.279624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.279636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.279650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.279663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.279677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.279690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.279704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.279717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.279748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.279761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.279775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.279788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.279803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.279815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.279830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.279846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.279861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.279875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.279889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.279902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.279916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.279929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.279943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.279956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.279970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.279983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.279998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.280011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.280025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.280045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.280060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.280074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.280088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.280101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.280115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.280128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.280143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.280155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:54.992 [2024-04-17 06:53:48.280170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.280207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.280223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.280241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.280257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.280271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.280286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.280299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.280314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.280328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.280343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.280356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.280371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.280384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.280399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.280414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.280428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.280442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.280465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.280495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.280511] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.280523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.280538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.280559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.280574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.280588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.280605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.280619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.280638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.280654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.280669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.280682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.280697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.992 [2024-04-17 06:53:48.280709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.992 [2024-04-17 06:53:48.280724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.993 [2024-04-17 06:53:48.280737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:48.280751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.993 [2024-04-17 06:53:48.280764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:48.280779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.993 [2024-04-17 06:53:48.280792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:48.280806] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.993 [2024-04-17 06:53:48.280819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:48.280833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.993 [2024-04-17 06:53:48.280845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:48.280860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.993 [2024-04-17 06:53:48.280872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:48.280887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.993 [2024-04-17 06:53:48.280899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:48.280913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.993 [2024-04-17 06:53:48.280926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:48.280945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.993 [2024-04-17 06:53:48.280959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:48.280972] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x731c00 is same with the state(5) to be set 00:27:54.993 [2024-04-17 06:53:48.280995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:54.993 [2024-04-17 06:53:48.281006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:54.993 [2024-04-17 06:53:48.281023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78664 len:8 PRP1 0x0 PRP2 0x0 00:27:54.993 [2024-04-17 06:53:48.281035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:48.281097] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x731c00 was disconnected and freed. reset controller. 
00:27:54.993 [2024-04-17 06:53:48.281116] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:27:54.993 [2024-04-17 06:53:48.281161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.993 [2024-04-17 06:53:48.281189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:48.281206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.993 [2024-04-17 06:53:48.281220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:48.281233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.993 [2024-04-17 06:53:48.281246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:48.281260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.993 [2024-04-17 06:53:48.281272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:48.281286] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:54.993 [2024-04-17 06:53:48.281339] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x710e60 (9): Bad file descriptor 00:27:54.993 [2024-04-17 06:53:48.284626] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:54.993 [2024-04-17 06:53:48.317280] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:54.993 [2024-04-17 06:53:52.774186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:129384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.993 [2024-04-17 06:53:52.774243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:52.774270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:130008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.993 [2024-04-17 06:53:52.774285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:52.774301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:130016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.993 [2024-04-17 06:53:52.774314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:52.774329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:130024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.993 [2024-04-17 06:53:52.774343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:52.774357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:130032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.993 [2024-04-17 06:53:52.774376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:52.774391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:130040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.993 [2024-04-17 06:53:52.774404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:52.774418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:130048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.993 [2024-04-17 06:53:52.774431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:52.774445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:130056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.993 [2024-04-17 06:53:52.774472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:52.774487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:130064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.993 [2024-04-17 06:53:52.774499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:52.774513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:130072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.993 [2024-04-17 06:53:52.774525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:52.774539] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:130080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.993 [2024-04-17 06:53:52.774551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:52.774565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:130088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.993 [2024-04-17 06:53:52.774578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:52.774592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:129392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.993 [2024-04-17 06:53:52.774605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:52.774619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:129400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.993 [2024-04-17 06:53:52.774631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:52.774645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:129408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.993 [2024-04-17 06:53:52.774657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:52.774670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:129416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.993 [2024-04-17 06:53:52.774683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:52.774696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:129424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.993 [2024-04-17 06:53:52.774710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:52.774728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:129432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.993 [2024-04-17 06:53:52.774741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:52.774755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:129440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.993 [2024-04-17 06:53:52.774767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:52.774781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:129448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.993 [2024-04-17 06:53:52.774794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:52.774807] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:129456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.993 [2024-04-17 06:53:52.774820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.993 [2024-04-17 06:53:52.774834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:129464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.993 [2024-04-17 06:53:52.774847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.774861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:129472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.994 [2024-04-17 06:53:52.774873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.774886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:129480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.994 [2024-04-17 06:53:52.774898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.774911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:129488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.994 [2024-04-17 06:53:52.774925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.774938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:129496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.994 [2024-04-17 06:53:52.774951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.774966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:129504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.994 [2024-04-17 06:53:52.774979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.774994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:130096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.994 [2024-04-17 06:53:52.775006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.775021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:130104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.994 [2024-04-17 06:53:52.775035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.775049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:130112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.994 [2024-04-17 06:53:52.775064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.775080] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:130120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.994 [2024-04-17 06:53:52.775092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.775105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:130128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.994 [2024-04-17 06:53:52.775118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.775131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:130136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.994 [2024-04-17 06:53:52.775144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.775181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:130144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.994 [2024-04-17 06:53:52.775197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.775212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:129512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.994 [2024-04-17 06:53:52.775240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.775255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:129520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.994 [2024-04-17 06:53:52.775269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.775283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:129528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.994 [2024-04-17 06:53:52.775297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.775311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:129536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.994 [2024-04-17 06:53:52.775325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.775339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:129544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.994 [2024-04-17 06:53:52.775352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.775367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:129552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.994 [2024-04-17 06:53:52.775380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.775394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 
lba:129560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.994 [2024-04-17 06:53:52.775408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.775422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:129568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.994 [2024-04-17 06:53:52.775435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.775450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:129576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.994 [2024-04-17 06:53:52.775467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.775497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:129584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.994 [2024-04-17 06:53:52.775511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.775525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:129592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.994 [2024-04-17 06:53:52.775538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.775568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:129600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.994 [2024-04-17 06:53:52.775580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.775594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:129608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.994 [2024-04-17 06:53:52.775606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.775619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:129616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.994 [2024-04-17 06:53:52.775632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.775645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:129624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.994 [2024-04-17 06:53:52.775658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.775672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:129632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.994 [2024-04-17 06:53:52.775684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.775698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:129640 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:54.994 [2024-04-17 06:53:52.775711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.775725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:129648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.994 [2024-04-17 06:53:52.775738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.775751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:129656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.994 [2024-04-17 06:53:52.775763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.775777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:129664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.994 [2024-04-17 06:53:52.775789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.775803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:129672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.994 [2024-04-17 06:53:52.775815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.775833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.994 [2024-04-17 06:53:52.775846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.775860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:129688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.994 [2024-04-17 06:53:52.775872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.775886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:129696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.994 [2024-04-17 06:53:52.775898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.775912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:129704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.994 [2024-04-17 06:53:52.775925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.775938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:129712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.994 [2024-04-17 06:53:52.775951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.994 [2024-04-17 06:53:52.775965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:129720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:54.994 [2024-04-17 06:53:52.775977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.995 [2024-04-17 06:53:52.775991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:129728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.995 [2024-04-17 06:53:52.776003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.995 [2024-04-17 06:53:52.776017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:129736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.995 [2024-04-17 06:53:52.776029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.995 [2024-04-17 06:53:52.776043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:129744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.995 [2024-04-17 06:53:52.776055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.995 [2024-04-17 06:53:52.776069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:129752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.995 [2024-04-17 06:53:52.776081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.995 [2024-04-17 06:53:52.776095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:129760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.995 [2024-04-17 06:53:52.776107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.995 [2024-04-17 06:53:52.776121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:129768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.995 [2024-04-17 06:53:52.776133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.995 [2024-04-17 06:53:52.776147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:129776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.995 [2024-04-17 06:53:52.776163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.995 [2024-04-17 06:53:52.776199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:129784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.995 [2024-04-17 06:53:52.776217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.995 [2024-04-17 06:53:52.776234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:129792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.995 [2024-04-17 06:53:52.776247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.995 [2024-04-17 06:53:52.776261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:129800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.995 [2024-04-17 
06:53:52.776273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.995 [2024-04-17 06:53:52.776288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:129808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.995 [2024-04-17 06:53:52.776301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.995 [2024-04-17 06:53:52.776315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:129816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.995 [2024-04-17 06:53:52.776328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.995 [2024-04-17 06:53:52.776341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:129824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.995 [2024-04-17 06:53:52.776354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.995 [2024-04-17 06:53:52.776368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.995 [2024-04-17 06:53:52.776381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.995 [2024-04-17 06:53:52.776395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:129840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.995 [2024-04-17 06:53:52.776408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.995 [2024-04-17 06:53:52.776422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:129848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.995 [2024-04-17 06:53:52.776435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.995 [2024-04-17 06:53:52.776449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:129856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.995 [2024-04-17 06:53:52.776462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.995 [2024-04-17 06:53:52.776476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:129864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.995 [2024-04-17 06:53:52.776503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.995 [2024-04-17 06:53:52.776518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:129872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.995 [2024-04-17 06:53:52.776530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.995 [2024-04-17 06:53:52.776552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:129880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.995 [2024-04-17 06:53:52.776566] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.995 [2024-04-17 06:53:52.776580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:130152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.995 [2024-04-17 06:53:52.776592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.995 [2024-04-17 06:53:52.776607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:130160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.995 [2024-04-17 06:53:52.776619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.995 [2024-04-17 06:53:52.776633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:130168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.995 [2024-04-17 06:53:52.776645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.995 [2024-04-17 06:53:52.776659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:130176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.995 [2024-04-17 06:53:52.776672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.995 [2024-04-17 06:53:52.776686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:130184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.995 [2024-04-17 06:53:52.776698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.995 [2024-04-17 06:53:52.776712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:130192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.995 [2024-04-17 06:53:52.776724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.995 [2024-04-17 06:53:52.776738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:130200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.995 [2024-04-17 06:53:52.776751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.995 [2024-04-17 06:53:52.776765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:130208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.995 [2024-04-17 06:53:52.776777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.995 [2024-04-17 06:53:52.776791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:130216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.995 [2024-04-17 06:53:52.776803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.995 [2024-04-17 06:53:52.776817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:129888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.995 [2024-04-17 06:53:52.776829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.995 [2024-04-17 06:53:52.776843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:129896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.995 [2024-04-17 06:53:52.776856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.995 [2024-04-17 06:53:52.776869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:129904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.995 [2024-04-17 06:53:52.776885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.776899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:129912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.996 [2024-04-17 06:53:52.776912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.776926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:129920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.996 [2024-04-17 06:53:52.776939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.776953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:129928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.996 [2024-04-17 06:53:52.776966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.776980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:129936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.996 [2024-04-17 06:53:52.776993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.777007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:130224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.996 [2024-04-17 06:53:52.777020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.777034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:130232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.996 [2024-04-17 06:53:52.777047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.777061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:130240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.996 [2024-04-17 06:53:52.777074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.777088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:130248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.996 [2024-04-17 06:53:52.777101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.777115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:130256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.996 [2024-04-17 06:53:52.777127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.777141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:130264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.996 [2024-04-17 06:53:52.777154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.777168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:130272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.996 [2024-04-17 06:53:52.777204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.777221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:130280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.996 [2024-04-17 06:53:52.777235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.777253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:130288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.996 [2024-04-17 06:53:52.777267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.777281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:130296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.996 [2024-04-17 06:53:52.777295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.777309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:130304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.996 [2024-04-17 06:53:52.777322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.777337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:130312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.996 [2024-04-17 06:53:52.777350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.777364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:130320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.996 [2024-04-17 06:53:52.777378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.777392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:130328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.996 [2024-04-17 06:53:52.777405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:54.996 [2024-04-17 06:53:52.777419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:130336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.996 [2024-04-17 06:53:52.777433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.777447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:130344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.996 [2024-04-17 06:53:52.777460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.777474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:130352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.996 [2024-04-17 06:53:52.777487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.777519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:130360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.996 [2024-04-17 06:53:52.777532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.777547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:130368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.996 [2024-04-17 06:53:52.777559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.777573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.996 [2024-04-17 06:53:52.777586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.777600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:130384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.996 [2024-04-17 06:53:52.777616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.777631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:130392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.996 [2024-04-17 06:53:52.777644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.777657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:130400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.996 [2024-04-17 06:53:52.777670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.777684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:129944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.996 [2024-04-17 06:53:52.777696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 
06:53:52.777710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:129952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.996 [2024-04-17 06:53:52.777723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.777737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:129960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.996 [2024-04-17 06:53:52.777749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.777763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:129968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.996 [2024-04-17 06:53:52.777776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.777790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:129976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.996 [2024-04-17 06:53:52.777803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.777817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:129984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.996 [2024-04-17 06:53:52.777830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.777844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:129992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.996 [2024-04-17 06:53:52.777857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.777883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:54.996 [2024-04-17 06:53:52.777897] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:54.996 [2024-04-17 06:53:52.777909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130000 len:8 PRP1 0x0 PRP2 0x0 00:27:54.996 [2024-04-17 06:53:52.777921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.777976] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x733bc0 was disconnected and freed. reset controller. 
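Every queued READ/WRITE on qid:1 in the dump above completes with "ABORTED - SQ DELETION (00/08)", i.e. status code type 0x0 (generic command status) and status code 0x08 (Command Aborted due to SQ Deletion), which is what the host reports when it drains a submission queue while tearing down a path; the qpair is then freed and the controller reset. Purely as an illustration, a check like the following could confirm that queued I/O was completed as aborted rather than silently dropped; the log file name is a placeholder, not something the test itself uses.

    # Illustrative only; "$run_log" is a placeholder for wherever this run's output was captured.
    run_log=./failover_run.log
    aborted=$(grep -c 'ABORTED - SQ DELETION' "$run_log")
    echo "queued commands completed as aborted during SQ deletion: $aborted"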
00:27:54.996 [2024-04-17 06:53:52.777993] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:27:54.996 [2024-04-17 06:53:52.778038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.996 [2024-04-17 06:53:52.778057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.778076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.996 [2024-04-17 06:53:52.778104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.996 [2024-04-17 06:53:52.778119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.997 [2024-04-17 06:53:52.778132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.997 [2024-04-17 06:53:52.778146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.997 [2024-04-17 06:53:52.778159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.997 [2024-04-17 06:53:52.778172] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:54.997 [2024-04-17 06:53:52.781587] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:54.997 [2024-04-17 06:53:52.781629] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x710e60 (9): Bad file descriptor 00:27:54.997 [2024-04-17 06:53:52.813438] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
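The "Start failover from 10.0.0.2:4422 to 10.0.0.2:4420" notice shows bdev_nvme rotating to the next path registered for controller NVMe0 and reconnecting through a controller reset. The alternate paths come from the listener and attach calls that appear further down in this log; condensed, and with the loop being shorthand rather than the script's exact wording, the registration looks like this.

    # Condensed from the rpc.py calls visible later in this log; the loop form is shorthand.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    for port in 4420 4421 4422; do
        # attaching the same bdev name with another address registers it as an additional failover path
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done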
00:27:54.997 00:27:54.997 Latency(us) 00:27:54.997 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:54.997 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:54.997 Verification LBA range: start 0x0 length 0x4000 00:27:54.997 NVMe0n1 : 15.01 8275.58 32.33 378.27 0.00 14762.47 782.79 19515.16 00:27:54.997 =================================================================================================================== 00:27:54.997 Total : 8275.58 32.33 378.27 0.00 14762.47 782.79 19515.16 00:27:54.997 Received shutdown signal, test time was about 15.000000 seconds 00:27:54.997 00:27:54.997 Latency(us) 00:27:54.997 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:54.997 =================================================================================================================== 00:27:54.997 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:54.997 06:53:58 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:27:54.997 06:53:58 -- host/failover.sh@65 -- # count=3 00:27:54.997 06:53:58 -- host/failover.sh@67 -- # (( count != 3 )) 00:27:54.997 06:53:58 -- host/failover.sh@73 -- # bdevperf_pid=90168 00:27:54.997 06:53:58 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:27:54.997 06:53:58 -- host/failover.sh@75 -- # waitforlisten 90168 /var/tmp/bdevperf.sock 00:27:54.997 06:53:58 -- common/autotest_common.sh@817 -- # '[' -z 90168 ']' 00:27:54.997 06:53:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:54.997 06:53:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:54.997 06:53:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:54.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
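The pass criterion for the 15-second phase is the grep a few lines above: the run must have produced exactly three "Resetting controller successful" notices (count=3, checked with (( count != 3 ))) before the second bdevperf instance, pid 90168, is launched with -t 1 for the short verify pass. The trace shows the grep, the count, and the comparison but not the failure branch, so in the sketch below both the input file and the exit-on-mismatch behaviour are assumptions.

    # Sketch of the count check seen above; the log file name and the failure branch are assumptions.
    count=$(grep -c 'Resetting controller successful' ./failover_run.log)
    if (( count != 3 )); then
        echo "expected 3 successful failover resets, saw $count" >&2
        exit 1
    fi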
00:27:54.997 06:53:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:54.997 06:53:58 -- common/autotest_common.sh@10 -- # set +x 00:27:54.997 06:53:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:54.997 06:53:59 -- common/autotest_common.sh@850 -- # return 0 00:27:54.997 06:53:59 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:54.997 [2024-04-17 06:53:59.319567] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:54.997 06:53:59 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:54.997 [2024-04-17 06:53:59.552183] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:54.997 06:53:59 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:55.561 NVMe0n1 00:27:55.561 06:54:00 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:55.818 00:27:55.818 06:54:00 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:56.384 00:27:56.384 06:54:00 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:56.384 06:54:00 -- host/failover.sh@82 -- # grep -q NVMe0 00:27:56.642 06:54:01 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:56.899 06:54:01 -- host/failover.sh@87 -- # sleep 3 00:28:00.176 06:54:04 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:00.176 06:54:04 -- host/failover.sh@88 -- # grep -q NVMe0 00:28:00.176 06:54:04 -- host/failover.sh@90 -- # run_test_pid=90950 00:28:00.176 06:54:04 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:00.176 06:54:04 -- host/failover.sh@92 -- # wait 90950 00:28:01.108 0 00:28:01.108 06:54:05 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:01.108 [2024-04-17 06:53:58.851740] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:28:01.108 [2024-04-17 06:53:58.851843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90168 ] 00:28:01.108 EAL: No free 2048 kB hugepages reported on node 1 00:28:01.108 [2024-04-17 06:53:58.911693] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.108 [2024-04-17 06:53:58.994088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.108 [2024-04-17 06:54:01.300744] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:28:01.108 [2024-04-17 06:54:01.300835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.108 [2024-04-17 06:54:01.300857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.108 [2024-04-17 06:54:01.300873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.108 [2024-04-17 06:54:01.300886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.108 [2024-04-17 06:54:01.300916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.109 [2024-04-17 06:54:01.300929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.109 [2024-04-17 06:54:01.300943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.109 [2024-04-17 06:54:01.300956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.109 [2024-04-17 06:54:01.300970] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:01.109 [2024-04-17 06:54:01.301017] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:01.109 [2024-04-17 06:54:01.301051] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce1e60 (9): Bad file descriptor 00:28:01.109 [2024-04-17 06:54:01.403352] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:01.109 Running I/O for 1 seconds... 
00:28:01.109 00:28:01.109 Latency(us) 00:28:01.109 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:01.109 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:01.109 Verification LBA range: start 0x0 length 0x4000 00:28:01.109 NVMe0n1 : 1.01 8114.20 31.70 0.00 0.00 15707.98 3325.35 20486.07 00:28:01.109 =================================================================================================================== 00:28:01.109 Total : 8114.20 31.70 0.00 0.00 15707.98 3325.35 20486.07 00:28:01.109 06:54:05 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:01.109 06:54:05 -- host/failover.sh@95 -- # grep -q NVMe0 00:28:01.366 06:54:05 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:01.623 06:54:06 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:01.623 06:54:06 -- host/failover.sh@99 -- # grep -q NVMe0 00:28:01.880 06:54:06 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:02.138 06:54:06 -- host/failover.sh@101 -- # sleep 3 00:28:05.455 06:54:09 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:05.455 06:54:09 -- host/failover.sh@103 -- # grep -q NVMe0 00:28:05.455 06:54:09 -- host/failover.sh@108 -- # killprocess 90168 00:28:05.455 06:54:09 -- common/autotest_common.sh@936 -- # '[' -z 90168 ']' 00:28:05.455 06:54:09 -- common/autotest_common.sh@940 -- # kill -0 90168 00:28:05.455 06:54:09 -- common/autotest_common.sh@941 -- # uname 00:28:05.455 06:54:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:05.455 06:54:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90168 00:28:05.455 06:54:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:05.455 06:54:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:05.455 06:54:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90168' 00:28:05.455 killing process with pid 90168 00:28:05.455 06:54:10 -- common/autotest_common.sh@955 -- # kill 90168 00:28:05.455 06:54:10 -- common/autotest_common.sh@960 -- # wait 90168 00:28:05.713 06:54:10 -- host/failover.sh@110 -- # sync 00:28:05.713 06:54:10 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:05.970 06:54:10 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:28:05.970 06:54:10 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:05.970 06:54:10 -- host/failover.sh@116 -- # nvmftestfini 00:28:05.970 06:54:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:05.970 06:54:10 -- nvmf/common.sh@117 -- # sync 00:28:05.970 06:54:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:05.970 06:54:10 -- nvmf/common.sh@120 -- # set +e 00:28:05.970 06:54:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:05.970 06:54:10 -- nvmf/common.sh@122 -- # modprobe -v 
-r nvme-tcp 00:28:05.970 rmmod nvme_tcp 00:28:05.970 rmmod nvme_fabrics 00:28:05.970 rmmod nvme_keyring 00:28:05.970 06:54:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:05.970 06:54:10 -- nvmf/common.sh@124 -- # set -e 00:28:05.970 06:54:10 -- nvmf/common.sh@125 -- # return 0 00:28:05.970 06:54:10 -- nvmf/common.sh@478 -- # '[' -n 87910 ']' 00:28:05.970 06:54:10 -- nvmf/common.sh@479 -- # killprocess 87910 00:28:05.970 06:54:10 -- common/autotest_common.sh@936 -- # '[' -z 87910 ']' 00:28:05.970 06:54:10 -- common/autotest_common.sh@940 -- # kill -0 87910 00:28:05.970 06:54:10 -- common/autotest_common.sh@941 -- # uname 00:28:05.970 06:54:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:05.970 06:54:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87910 00:28:05.970 06:54:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:05.970 06:54:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:05.970 06:54:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87910' 00:28:05.970 killing process with pid 87910 00:28:05.970 06:54:10 -- common/autotest_common.sh@955 -- # kill 87910 00:28:05.970 06:54:10 -- common/autotest_common.sh@960 -- # wait 87910 00:28:06.229 06:54:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:28:06.229 06:54:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:06.229 06:54:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:06.229 06:54:10 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:06.229 06:54:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:06.229 06:54:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.229 06:54:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:06.229 06:54:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.757 06:54:12 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:08.757 00:28:08.757 real 0m34.667s 00:28:08.757 user 2m2.377s 00:28:08.757 sys 0m5.679s 00:28:08.757 06:54:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:08.757 06:54:12 -- common/autotest_common.sh@10 -- # set +x 00:28:08.757 ************************************ 00:28:08.757 END TEST nvmf_failover 00:28:08.757 ************************************ 00:28:08.757 06:54:12 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:08.757 06:54:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:08.757 06:54:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:08.757 06:54:12 -- common/autotest_common.sh@10 -- # set +x 00:28:08.757 ************************************ 00:28:08.757 START TEST nvmf_discovery 00:28:08.757 ************************************ 00:28:08.757 06:54:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:08.757 * Looking for test storage... 
00:28:08.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:08.758 06:54:13 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:08.758 06:54:13 -- nvmf/common.sh@7 -- # uname -s 00:28:08.758 06:54:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:08.758 06:54:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:08.758 06:54:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:08.758 06:54:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:08.758 06:54:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:08.758 06:54:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:08.758 06:54:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:08.758 06:54:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:08.758 06:54:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:08.758 06:54:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:08.758 06:54:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:08.758 06:54:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:08.758 06:54:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:08.758 06:54:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:08.758 06:54:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:08.758 06:54:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:08.758 06:54:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:08.758 06:54:13 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:08.758 06:54:13 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:08.758 06:54:13 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:08.758 06:54:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.758 06:54:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.758 06:54:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.758 06:54:13 -- paths/export.sh@5 -- # export PATH 00:28:08.758 06:54:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.758 06:54:13 -- nvmf/common.sh@47 -- # : 0 00:28:08.758 06:54:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:08.758 06:54:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:08.758 06:54:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:08.758 06:54:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:08.758 06:54:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:08.758 06:54:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:08.758 06:54:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:08.758 06:54:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:08.758 06:54:13 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:28:08.758 06:54:13 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:28:08.758 06:54:13 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:28:08.758 06:54:13 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:28:08.758 06:54:13 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:28:08.758 06:54:13 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:28:08.758 06:54:13 -- host/discovery.sh@25 -- # nvmftestinit 00:28:08.758 06:54:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:08.758 06:54:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:08.758 06:54:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:08.758 06:54:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:08.758 06:54:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:08.758 06:54:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:08.758 06:54:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:08.758 06:54:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.758 06:54:13 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:28:08.758 06:54:13 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:28:08.758 06:54:13 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:08.758 06:54:13 -- common/autotest_common.sh@10 -- # set +x 00:28:10.657 06:54:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:10.657 06:54:14 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:10.657 06:54:14 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:10.657 06:54:14 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:10.657 06:54:14 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:10.657 06:54:14 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:10.657 06:54:14 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:10.657 06:54:14 -- nvmf/common.sh@295 -- # net_devs=() 00:28:10.657 06:54:14 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:10.657 06:54:14 -- nvmf/common.sh@296 -- # e810=() 00:28:10.657 06:54:14 -- nvmf/common.sh@296 -- # local -ga e810 00:28:10.657 06:54:14 -- nvmf/common.sh@297 -- # x722=() 00:28:10.657 06:54:14 -- nvmf/common.sh@297 -- # local -ga x722 00:28:10.657 06:54:14 -- nvmf/common.sh@298 -- # mlx=() 00:28:10.657 06:54:14 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:10.657 06:54:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:10.657 06:54:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:10.657 06:54:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:10.657 06:54:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:10.658 06:54:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:10.658 06:54:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:10.658 06:54:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:10.658 06:54:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:10.658 06:54:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:10.658 06:54:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:10.658 06:54:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:10.658 06:54:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:10.658 06:54:14 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:10.658 06:54:14 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:10.658 06:54:14 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:10.658 06:54:14 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:10.658 06:54:14 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:10.658 06:54:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:10.658 06:54:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:10.658 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:10.658 06:54:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:10.658 06:54:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:10.658 06:54:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.658 06:54:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.658 06:54:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:10.658 06:54:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:10.658 06:54:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:10.658 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:10.658 06:54:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:10.658 06:54:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:10.658 06:54:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.658 06:54:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.658 06:54:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:10.658 06:54:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:10.658 06:54:14 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:10.658 06:54:14 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:10.658 06:54:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:10.658 
06:54:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.658 06:54:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:10.658 06:54:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.658 06:54:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:10.658 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:10.658 06:54:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.658 06:54:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:10.658 06:54:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.658 06:54:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:10.658 06:54:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.658 06:54:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:10.658 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:10.658 06:54:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.658 06:54:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:28:10.658 06:54:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:28:10.658 06:54:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:28:10.658 06:54:14 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:28:10.658 06:54:14 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:28:10.658 06:54:14 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:10.658 06:54:14 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:10.658 06:54:14 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:10.658 06:54:14 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:10.658 06:54:14 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:10.658 06:54:14 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:10.658 06:54:14 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:10.658 06:54:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:10.658 06:54:14 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:10.658 06:54:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:10.658 06:54:14 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:10.658 06:54:14 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:10.658 06:54:14 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:10.658 06:54:14 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:10.658 06:54:14 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:10.658 06:54:15 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:10.658 06:54:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:10.658 06:54:15 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:10.658 06:54:15 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:10.658 06:54:15 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:10.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:10.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:28:10.658 00:28:10.658 --- 10.0.0.2 ping statistics --- 00:28:10.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.658 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:28:10.658 06:54:15 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:10.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:10.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:28:10.658 00:28:10.658 --- 10.0.0.1 ping statistics --- 00:28:10.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.658 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:28:10.658 06:54:15 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:10.658 06:54:15 -- nvmf/common.sh@411 -- # return 0 00:28:10.658 06:54:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:28:10.658 06:54:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:10.658 06:54:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:10.658 06:54:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:10.658 06:54:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:10.658 06:54:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:10.658 06:54:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:10.658 06:54:15 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:28:10.658 06:54:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:10.658 06:54:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:10.658 06:54:15 -- common/autotest_common.sh@10 -- # set +x 00:28:10.658 06:54:15 -- nvmf/common.sh@470 -- # nvmfpid=94064 00:28:10.658 06:54:15 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:10.658 06:54:15 -- nvmf/common.sh@471 -- # waitforlisten 94064 00:28:10.658 06:54:15 -- common/autotest_common.sh@817 -- # '[' -z 94064 ']' 00:28:10.658 06:54:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:10.658 06:54:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:10.658 06:54:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:10.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:10.658 06:54:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:10.658 06:54:15 -- common/autotest_common.sh@10 -- # set +x 00:28:10.658 [2024-04-17 06:54:15.133389] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:28:10.658 [2024-04-17 06:54:15.133478] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:10.658 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.658 [2024-04-17 06:54:15.198700] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.917 [2024-04-17 06:54:15.283424] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:10.917 [2024-04-17 06:54:15.283500] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:10.917 [2024-04-17 06:54:15.283513] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:10.917 [2024-04-17 06:54:15.283535] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:10.917 [2024-04-17 06:54:15.283558] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
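The block above is the standard phy-mode TCP bring-up: the two ports of the detected E810 NIC are split so that cvl_0_0 (10.0.0.2) moves into the cvl_0_0_ns_spdk namespace and serves as the target side, while cvl_0_1 (10.0.0.1) stays in the default namespace as the initiator side, with a one-packet ping in each direction as a sanity check before nvmf_tgt is started inside the namespace. Condensed from the nvmf_tcp_init steps traced above (interface names as detected on this machine), the wiring is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator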
00:28:10.917 [2024-04-17 06:54:15.283585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.917 06:54:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:10.917 06:54:15 -- common/autotest_common.sh@850 -- # return 0 00:28:10.917 06:54:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:10.917 06:54:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:10.917 06:54:15 -- common/autotest_common.sh@10 -- # set +x 00:28:10.917 06:54:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:10.917 06:54:15 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:10.917 06:54:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.917 06:54:15 -- common/autotest_common.sh@10 -- # set +x 00:28:10.917 [2024-04-17 06:54:15.420717] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:10.917 06:54:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.917 06:54:15 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:28:10.917 06:54:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.917 06:54:15 -- common/autotest_common.sh@10 -- # set +x 00:28:10.917 [2024-04-17 06:54:15.428917] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:10.917 06:54:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.917 06:54:15 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:28:10.917 06:54:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.917 06:54:15 -- common/autotest_common.sh@10 -- # set +x 00:28:10.917 null0 00:28:10.917 06:54:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.917 06:54:15 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:28:10.917 06:54:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.917 06:54:15 -- common/autotest_common.sh@10 -- # set +x 00:28:10.917 null1 00:28:10.917 06:54:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.917 06:54:15 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:28:10.917 06:54:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.917 06:54:15 -- common/autotest_common.sh@10 -- # set +x 00:28:10.917 06:54:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.917 06:54:15 -- host/discovery.sh@45 -- # hostpid=94091 00:28:10.917 06:54:15 -- host/discovery.sh@46 -- # waitforlisten 94091 /tmp/host.sock 00:28:10.917 06:54:15 -- common/autotest_common.sh@817 -- # '[' -z 94091 ']' 00:28:10.917 06:54:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:28:10.917 06:54:15 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:28:10.917 06:54:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:10.917 06:54:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:10.917 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:10.917 06:54:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:10.917 06:54:15 -- common/autotest_common.sh@10 -- # set +x 00:28:10.917 [2024-04-17 06:54:15.503652] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
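At this point the target is listening for discovery on 10.0.0.2 port 8009, two null bdevs (null0 and null1) have been created, and a second nvmf_tgt has been started on /tmp/host.sock to play the host role. The discovery attach the test performs next is condensed below from the rpc_cmd calls further down in this log; rpc_cmd in the trace is the harness's wrapper, calling scripts/rpc.py directly is used here only for illustration, and the get_* queries at the end are the checks the script repeats after each step.

    # Condensed from the host-side rpc_cmd calls further down in this log.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /tmp/host.sock log_set_flag bdev_nvme
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test
    # subsystems reported by the discovery service are attached automatically and show up here:
    $rpc -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
    $rpc -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'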
00:28:10.917 [2024-04-17 06:54:15.503729] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94091 ] 00:28:11.175 EAL: No free 2048 kB hugepages reported on node 1 00:28:11.175 [2024-04-17 06:54:15.566045] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.175 [2024-04-17 06:54:15.653365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.175 06:54:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:11.175 06:54:15 -- common/autotest_common.sh@850 -- # return 0 00:28:11.175 06:54:15 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:11.175 06:54:15 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:28:11.175 06:54:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.175 06:54:15 -- common/autotest_common.sh@10 -- # set +x 00:28:11.175 06:54:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.175 06:54:15 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:28:11.175 06:54:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.175 06:54:15 -- common/autotest_common.sh@10 -- # set +x 00:28:11.175 06:54:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.175 06:54:15 -- host/discovery.sh@72 -- # notify_id=0 00:28:11.175 06:54:15 -- host/discovery.sh@83 -- # get_subsystem_names 00:28:11.175 06:54:15 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:11.175 06:54:15 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:11.175 06:54:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.175 06:54:15 -- common/autotest_common.sh@10 -- # set +x 00:28:11.175 06:54:15 -- host/discovery.sh@59 -- # sort 00:28:11.175 06:54:15 -- host/discovery.sh@59 -- # xargs 00:28:11.433 06:54:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.433 06:54:15 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:28:11.433 06:54:15 -- host/discovery.sh@84 -- # get_bdev_list 00:28:11.433 06:54:15 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:11.433 06:54:15 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:11.433 06:54:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.433 06:54:15 -- host/discovery.sh@55 -- # sort 00:28:11.433 06:54:15 -- common/autotest_common.sh@10 -- # set +x 00:28:11.433 06:54:15 -- host/discovery.sh@55 -- # xargs 00:28:11.433 06:54:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.433 06:54:15 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:28:11.433 06:54:15 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:28:11.433 06:54:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.433 06:54:15 -- common/autotest_common.sh@10 -- # set +x 00:28:11.433 06:54:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.433 06:54:15 -- host/discovery.sh@87 -- # get_subsystem_names 00:28:11.433 06:54:15 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:11.433 06:54:15 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:11.433 06:54:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.433 06:54:15 -- common/autotest_common.sh@10 -- # set 
+x 00:28:11.433 06:54:15 -- host/discovery.sh@59 -- # sort 00:28:11.433 06:54:15 -- host/discovery.sh@59 -- # xargs 00:28:11.433 06:54:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.433 06:54:15 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:28:11.433 06:54:15 -- host/discovery.sh@88 -- # get_bdev_list 00:28:11.433 06:54:15 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:11.433 06:54:15 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:11.433 06:54:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.433 06:54:15 -- common/autotest_common.sh@10 -- # set +x 00:28:11.433 06:54:15 -- host/discovery.sh@55 -- # sort 00:28:11.433 06:54:15 -- host/discovery.sh@55 -- # xargs 00:28:11.433 06:54:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.433 06:54:15 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:28:11.433 06:54:15 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:28:11.433 06:54:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.433 06:54:15 -- common/autotest_common.sh@10 -- # set +x 00:28:11.433 06:54:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.433 06:54:15 -- host/discovery.sh@91 -- # get_subsystem_names 00:28:11.433 06:54:15 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:11.433 06:54:15 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:11.433 06:54:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.433 06:54:15 -- common/autotest_common.sh@10 -- # set +x 00:28:11.433 06:54:15 -- host/discovery.sh@59 -- # sort 00:28:11.433 06:54:15 -- host/discovery.sh@59 -- # xargs 00:28:11.433 06:54:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.433 06:54:16 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:28:11.433 06:54:16 -- host/discovery.sh@92 -- # get_bdev_list 00:28:11.433 06:54:16 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:11.433 06:54:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.434 06:54:16 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:11.434 06:54:16 -- common/autotest_common.sh@10 -- # set +x 00:28:11.434 06:54:16 -- host/discovery.sh@55 -- # sort 00:28:11.434 06:54:16 -- host/discovery.sh@55 -- # xargs 00:28:11.434 06:54:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.692 06:54:16 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:28:11.692 06:54:16 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:11.692 06:54:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.692 06:54:16 -- common/autotest_common.sh@10 -- # set +x 00:28:11.692 [2024-04-17 06:54:16.050631] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:11.692 06:54:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.692 06:54:16 -- host/discovery.sh@97 -- # get_subsystem_names 00:28:11.692 06:54:16 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:11.692 06:54:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.692 06:54:16 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:11.692 06:54:16 -- common/autotest_common.sh@10 -- # set +x 00:28:11.692 06:54:16 -- host/discovery.sh@59 -- # sort 00:28:11.692 06:54:16 -- host/discovery.sh@59 -- # xargs 00:28:11.692 06:54:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.692 06:54:16 -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:28:11.692 06:54:16 -- host/discovery.sh@98 -- # get_bdev_list 00:28:11.692 06:54:16 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:11.692 06:54:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.692 06:54:16 -- common/autotest_common.sh@10 -- # set +x 00:28:11.692 06:54:16 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:11.692 06:54:16 -- host/discovery.sh@55 -- # sort 00:28:11.692 06:54:16 -- host/discovery.sh@55 -- # xargs 00:28:11.692 06:54:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.692 06:54:16 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:28:11.692 06:54:16 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:28:11.692 06:54:16 -- host/discovery.sh@79 -- # expected_count=0 00:28:11.692 06:54:16 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:11.692 06:54:16 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:11.692 06:54:16 -- common/autotest_common.sh@901 -- # local max=10 00:28:11.692 06:54:16 -- common/autotest_common.sh@902 -- # (( max-- )) 00:28:11.692 06:54:16 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:11.692 06:54:16 -- common/autotest_common.sh@903 -- # get_notification_count 00:28:11.692 06:54:16 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:11.692 06:54:16 -- host/discovery.sh@74 -- # jq '. | length' 00:28:11.692 06:54:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.692 06:54:16 -- common/autotest_common.sh@10 -- # set +x 00:28:11.692 06:54:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.692 06:54:16 -- host/discovery.sh@74 -- # notification_count=0 00:28:11.692 06:54:16 -- host/discovery.sh@75 -- # notify_id=0 00:28:11.692 06:54:16 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:28:11.692 06:54:16 -- common/autotest_common.sh@904 -- # return 0 00:28:11.692 06:54:16 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:28:11.692 06:54:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.692 06:54:16 -- common/autotest_common.sh@10 -- # set +x 00:28:11.692 06:54:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.692 06:54:16 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:11.692 06:54:16 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:11.692 06:54:16 -- common/autotest_common.sh@901 -- # local max=10 00:28:11.692 06:54:16 -- common/autotest_common.sh@902 -- # (( max-- )) 00:28:11.692 06:54:16 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:11.692 06:54:16 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:28:11.692 06:54:16 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:11.692 06:54:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.692 06:54:16 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:11.692 06:54:16 -- common/autotest_common.sh@10 -- # set +x 00:28:11.692 06:54:16 -- host/discovery.sh@59 -- # sort 00:28:11.692 06:54:16 -- host/discovery.sh@59 -- # xargs 00:28:11.692 06:54:16 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:28:11.692 06:54:16 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]] 00:28:11.692 06:54:16 -- common/autotest_common.sh@906 -- # sleep 1 00:28:12.257 [2024-04-17 06:54:16.829375] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:12.257 [2024-04-17 06:54:16.829402] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:12.257 [2024-04-17 06:54:16.829422] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:12.554 [2024-04-17 06:54:16.915725] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:12.554 [2024-04-17 06:54:16.978378] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:12.554 [2024-04-17 06:54:16.978400] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:12.812 06:54:17 -- common/autotest_common.sh@902 -- # (( max-- )) 00:28:12.812 06:54:17 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:12.812 06:54:17 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:28:12.812 06:54:17 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:12.812 06:54:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.812 06:54:17 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:12.812 06:54:17 -- common/autotest_common.sh@10 -- # set +x 00:28:12.812 06:54:17 -- host/discovery.sh@59 -- # sort 00:28:12.812 06:54:17 -- host/discovery.sh@59 -- # xargs 00:28:12.812 06:54:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.812 06:54:17 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.812 06:54:17 -- common/autotest_common.sh@904 -- # return 0 00:28:12.812 06:54:17 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:28:12.812 06:54:17 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:28:12.813 06:54:17 -- common/autotest_common.sh@901 -- # local max=10 00:28:12.813 06:54:17 -- common/autotest_common.sh@902 -- # (( max-- )) 00:28:12.813 06:54:17 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:28:12.813 06:54:17 -- common/autotest_common.sh@903 -- # get_bdev_list 00:28:12.813 06:54:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:12.813 06:54:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.813 06:54:17 -- common/autotest_common.sh@10 -- # set +x 00:28:12.813 06:54:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:12.813 06:54:17 -- host/discovery.sh@55 -- # sort 00:28:12.813 06:54:17 -- host/discovery.sh@55 -- # xargs 00:28:12.813 06:54:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.813 06:54:17 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:28:12.813 06:54:17 -- common/autotest_common.sh@904 -- # return 0 00:28:12.813 06:54:17 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:28:12.813 06:54:17 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:28:12.813 06:54:17 -- common/autotest_common.sh@901 -- # local max=10 00:28:12.813 06:54:17 -- 
common/autotest_common.sh@902 -- # (( max-- )) 00:28:12.813 06:54:17 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:28:12.813 06:54:17 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:28:12.813 06:54:17 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:12.813 06:54:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.813 06:54:17 -- common/autotest_common.sh@10 -- # set +x 00:28:12.813 06:54:17 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:12.813 06:54:17 -- host/discovery.sh@63 -- # sort -n 00:28:12.813 06:54:17 -- host/discovery.sh@63 -- # xargs 00:28:12.813 06:54:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.813 06:54:17 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:28:12.813 06:54:17 -- common/autotest_common.sh@904 -- # return 0 00:28:12.813 06:54:17 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:28:12.813 06:54:17 -- host/discovery.sh@79 -- # expected_count=1 00:28:12.813 06:54:17 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:12.813 06:54:17 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:12.813 06:54:17 -- common/autotest_common.sh@901 -- # local max=10 00:28:12.813 06:54:17 -- common/autotest_common.sh@902 -- # (( max-- )) 00:28:12.813 06:54:17 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:12.813 06:54:17 -- common/autotest_common.sh@903 -- # get_notification_count 00:28:12.813 06:54:17 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:12.813 06:54:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.813 06:54:17 -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:12.813 06:54:17 -- common/autotest_common.sh@10 -- # set +x 00:28:12.813 06:54:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.813 06:54:17 -- host/discovery.sh@74 -- # notification_count=1 00:28:12.813 06:54:17 -- host/discovery.sh@75 -- # notify_id=1 00:28:12.813 06:54:17 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:28:12.813 06:54:17 -- common/autotest_common.sh@904 -- # return 0 00:28:12.813 06:54:17 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:28:12.813 06:54:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.813 06:54:17 -- common/autotest_common.sh@10 -- # set +x 00:28:12.813 06:54:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.813 06:54:17 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:12.813 06:54:17 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:12.813 06:54:17 -- common/autotest_common.sh@901 -- # local max=10 00:28:12.813 06:54:17 -- common/autotest_common.sh@902 -- # (( max-- )) 00:28:12.813 06:54:17 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:12.813 06:54:17 -- common/autotest_common.sh@903 -- # get_bdev_list 00:28:12.813 06:54:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:12.813 06:54:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.813 06:54:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:12.813 06:54:17 -- common/autotest_common.sh@10 -- # set +x 00:28:12.813 06:54:17 -- host/discovery.sh@55 -- # sort 00:28:12.813 06:54:17 -- host/discovery.sh@55 -- # xargs 00:28:13.071 06:54:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.071 06:54:17 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:13.071 06:54:17 -- common/autotest_common.sh@904 -- # return 0 00:28:13.071 06:54:17 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:28:13.071 06:54:17 -- host/discovery.sh@79 -- # expected_count=1 00:28:13.071 06:54:17 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:13.071 06:54:17 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:13.071 06:54:17 -- common/autotest_common.sh@901 -- # local max=10 00:28:13.071 06:54:17 -- common/autotest_common.sh@902 -- # (( max-- )) 00:28:13.071 06:54:17 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:13.071 06:54:17 -- common/autotest_common.sh@903 -- # get_notification_count 00:28:13.329 06:54:17 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:28:13.329 06:54:17 -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:13.329 06:54:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.329 06:54:17 -- common/autotest_common.sh@10 -- # set +x 00:28:13.329 06:54:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.329 06:54:17 -- host/discovery.sh@74 -- # notification_count=1 00:28:13.329 06:54:17 -- host/discovery.sh@75 -- # notify_id=2 00:28:13.329 06:54:17 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:28:13.329 06:54:17 -- common/autotest_common.sh@904 -- # return 0 00:28:13.329 06:54:17 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:28:13.329 06:54:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.329 06:54:17 -- common/autotest_common.sh@10 -- # set +x 00:28:13.329 [2024-04-17 06:54:17.723563] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:13.329 [2024-04-17 06:54:17.723983] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:13.329 [2024-04-17 06:54:17.724021] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:13.329 06:54:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.329 06:54:17 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:13.329 06:54:17 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:13.329 06:54:17 -- common/autotest_common.sh@901 -- # local max=10 00:28:13.329 06:54:17 -- common/autotest_common.sh@902 -- # (( max-- )) 00:28:13.329 06:54:17 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:13.329 06:54:17 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:28:13.329 06:54:17 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:13.329 06:54:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.329 06:54:17 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:13.329 06:54:17 -- common/autotest_common.sh@10 -- # set +x 00:28:13.329 06:54:17 -- host/discovery.sh@59 -- # sort 00:28:13.329 06:54:17 -- host/discovery.sh@59 -- # xargs 00:28:13.329 06:54:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.329 06:54:17 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.329 06:54:17 -- common/autotest_common.sh@904 -- # return 0 00:28:13.329 06:54:17 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:13.329 06:54:17 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:13.329 06:54:17 -- common/autotest_common.sh@901 -- # local max=10 00:28:13.329 06:54:17 -- common/autotest_common.sh@902 -- # (( max-- )) 00:28:13.329 06:54:17 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:13.329 06:54:17 -- common/autotest_common.sh@903 -- # get_bdev_list 00:28:13.329 06:54:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:13.329 06:54:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:13.329 06:54:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.329 06:54:17 -- common/autotest_common.sh@10 -- # set +x 00:28:13.329 06:54:17 -- host/discovery.sh@55 -- # sort 00:28:13.329 06:54:17 -- host/discovery.sh@55 -- # xargs 00:28:13.329 06:54:17 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:28:13.329 [2024-04-17 06:54:17.811700] bdev_nvme.c:6830:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:28:13.329 06:54:17 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:13.329 06:54:17 -- common/autotest_common.sh@904 -- # return 0 00:28:13.329 06:54:17 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:28:13.329 06:54:17 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:28:13.329 06:54:17 -- common/autotest_common.sh@901 -- # local max=10 00:28:13.329 06:54:17 -- common/autotest_common.sh@902 -- # (( max-- )) 00:28:13.329 06:54:17 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:28:13.329 06:54:17 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:28:13.329 06:54:17 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:13.329 06:54:17 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:13.329 06:54:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.329 06:54:17 -- common/autotest_common.sh@10 -- # set +x 00:28:13.329 06:54:17 -- host/discovery.sh@63 -- # sort -n 00:28:13.329 06:54:17 -- host/discovery.sh@63 -- # xargs 00:28:13.329 06:54:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.329 06:54:17 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:28:13.329 06:54:17 -- common/autotest_common.sh@906 -- # sleep 1 00:28:13.587 [2024-04-17 06:54:18.073961] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:13.587 [2024-04-17 06:54:18.073988] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:13.587 [2024-04-17 06:54:18.073998] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:14.521 06:54:18 -- common/autotest_common.sh@902 -- # (( max-- )) 00:28:14.521 06:54:18 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:28:14.521 06:54:18 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:28:14.521 06:54:18 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:14.521 06:54:18 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:14.521 06:54:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.521 06:54:18 -- common/autotest_common.sh@10 -- # set +x 00:28:14.521 06:54:18 -- host/discovery.sh@63 -- # sort -n 00:28:14.521 06:54:18 -- host/discovery.sh@63 -- # xargs 00:28:14.521 06:54:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.521 06:54:18 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:28:14.521 06:54:18 -- common/autotest_common.sh@904 -- # return 0 00:28:14.521 06:54:18 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:28:14.521 06:54:18 -- host/discovery.sh@79 -- # expected_count=0 00:28:14.521 06:54:18 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:14.521 06:54:18 -- 
common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:14.521 06:54:18 -- common/autotest_common.sh@901 -- # local max=10 00:28:14.521 06:54:18 -- common/autotest_common.sh@902 -- # (( max-- )) 00:28:14.521 06:54:18 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:14.521 06:54:18 -- common/autotest_common.sh@903 -- # get_notification_count 00:28:14.521 06:54:18 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:14.521 06:54:18 -- host/discovery.sh@74 -- # jq '. | length' 00:28:14.521 06:54:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.521 06:54:18 -- common/autotest_common.sh@10 -- # set +x 00:28:14.521 06:54:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.521 06:54:18 -- host/discovery.sh@74 -- # notification_count=0 00:28:14.521 06:54:18 -- host/discovery.sh@75 -- # notify_id=2 00:28:14.521 06:54:18 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:28:14.521 06:54:18 -- common/autotest_common.sh@904 -- # return 0 00:28:14.521 06:54:18 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:14.521 06:54:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.521 06:54:18 -- common/autotest_common.sh@10 -- # set +x 00:28:14.521 [2024-04-17 06:54:18.943836] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:14.521 [2024-04-17 06:54:18.943890] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:14.521 06:54:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.521 06:54:18 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:14.521 06:54:18 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:14.521 06:54:18 -- common/autotest_common.sh@901 -- # local max=10 00:28:14.521 06:54:18 -- common/autotest_common.sh@902 -- # (( max-- )) 00:28:14.521 06:54:18 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:14.521 [2024-04-17 06:54:18.948032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:14.521 [2024-04-17 06:54:18.948065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.521 [2024-04-17 06:54:18.948097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:14.521 [2024-04-17 06:54:18.948112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.521 [2024-04-17 06:54:18.948126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:14.521 [2024-04-17 06:54:18.948140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.521 [2024-04-17 06:54:18.948154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:14.521 [2024-04-17 06:54:18.948189] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.521 [2024-04-17 06:54:18.948205] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f63450 is same with the state(5) to be set 00:28:14.521 06:54:18 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:28:14.521 06:54:18 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:14.521 06:54:18 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:14.521 06:54:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.521 06:54:18 -- common/autotest_common.sh@10 -- # set +x 00:28:14.521 06:54:18 -- host/discovery.sh@59 -- # sort 00:28:14.521 06:54:18 -- host/discovery.sh@59 -- # xargs 00:28:14.521 [2024-04-17 06:54:18.958026] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f63450 (9): Bad file descriptor 00:28:14.521 06:54:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.521 [2024-04-17 06:54:18.968067] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:14.521 [2024-04-17 06:54:18.968343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.521 [2024-04-17 06:54:18.968500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.521 [2024-04-17 06:54:18.968528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f63450 with addr=10.0.0.2, port=4420 00:28:14.521 [2024-04-17 06:54:18.968545] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f63450 is same with the state(5) to be set 00:28:14.521 [2024-04-17 06:54:18.968569] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f63450 (9): Bad file descriptor 00:28:14.521 [2024-04-17 06:54:18.968590] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:14.521 [2024-04-17 06:54:18.968606] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:14.521 [2024-04-17 06:54:18.968622] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:14.521 [2024-04-17 06:54:18.968642] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
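The interleaved autotest_common.sh@900-@906 lines in this trace (local cond, local max=10, (( max-- )), eval, sleep 1) come from the harness's condition-polling helper; a rough reconstruction of that pattern, not the verbatim helper code:

  # Re-evaluate a shell condition up to 10 times, one second apart; succeed as soon as it holds.
  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          eval "$cond" && return 0
          sleep 1
      done
      return 1
  }
  # Example from this test: waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'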
00:28:14.521 [2024-04-17 06:54:18.978145] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:14.521 [2024-04-17 06:54:18.978349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.521 [2024-04-17 06:54:18.978489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.521 [2024-04-17 06:54:18.978515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f63450 with addr=10.0.0.2, port=4420 00:28:14.522 [2024-04-17 06:54:18.978532] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f63450 is same with the state(5) to be set 00:28:14.522 [2024-04-17 06:54:18.978554] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f63450 (9): Bad file descriptor 00:28:14.522 [2024-04-17 06:54:18.978591] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:14.522 [2024-04-17 06:54:18.978611] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:14.522 [2024-04-17 06:54:18.978624] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:14.522 [2024-04-17 06:54:18.978644] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:14.522 06:54:18 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.522 06:54:18 -- common/autotest_common.sh@904 -- # return 0 00:28:14.522 06:54:18 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:14.522 06:54:18 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:14.522 06:54:18 -- common/autotest_common.sh@901 -- # local max=10 00:28:14.522 06:54:18 -- common/autotest_common.sh@902 -- # (( max-- )) 00:28:14.522 06:54:18 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:14.522 [2024-04-17 06:54:18.988242] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:14.522 [2024-04-17 06:54:18.988427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.522 [2024-04-17 06:54:18.988617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.522 [2024-04-17 06:54:18.988644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f63450 with addr=10.0.0.2, port=4420 00:28:14.522 [2024-04-17 06:54:18.988661] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f63450 is same with the state(5) to be set 00:28:14.522 [2024-04-17 06:54:18.988683] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f63450 (9): Bad file descriptor 00:28:14.522 [2024-04-17 06:54:18.988705] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:14.522 [2024-04-17 06:54:18.988721] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:14.522 [2024-04-17 06:54:18.988739] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:14.522 [2024-04-17 06:54:18.988760] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:14.522 06:54:18 -- common/autotest_common.sh@903 -- # get_bdev_list 00:28:14.522 06:54:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:14.522 06:54:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.522 06:54:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:14.522 06:54:18 -- common/autotest_common.sh@10 -- # set +x 00:28:14.522 06:54:18 -- host/discovery.sh@55 -- # sort 00:28:14.522 06:54:18 -- host/discovery.sh@55 -- # xargs 00:28:14.522 [2024-04-17 06:54:18.998319] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:14.522 [2024-04-17 06:54:18.998482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.522 [2024-04-17 06:54:18.998639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.522 [2024-04-17 06:54:18.998666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f63450 with addr=10.0.0.2, port=4420 00:28:14.522 [2024-04-17 06:54:18.998683] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f63450 is same with the state(5) to be set 00:28:14.522 [2024-04-17 06:54:18.998719] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f63450 (9): Bad file descriptor 00:28:14.522 [2024-04-17 06:54:18.998773] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:14.522 [2024-04-17 06:54:18.998794] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:14.522 [2024-04-17 06:54:18.998809] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:14.522 [2024-04-17 06:54:18.998828] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:14.522 [2024-04-17 06:54:19.008395] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:14.522 [2024-04-17 06:54:19.008622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.522 [2024-04-17 06:54:19.008780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.522 [2024-04-17 06:54:19.008806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f63450 with addr=10.0.0.2, port=4420 00:28:14.522 [2024-04-17 06:54:19.008823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f63450 is same with the state(5) to be set 00:28:14.522 [2024-04-17 06:54:19.008846] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f63450 (9): Bad file descriptor 00:28:14.522 [2024-04-17 06:54:19.008866] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:14.522 [2024-04-17 06:54:19.008881] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:14.522 [2024-04-17 06:54:19.008894] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:14.522 [2024-04-17 06:54:19.008914] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:14.522 06:54:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.522 [2024-04-17 06:54:19.018482] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:14.522 [2024-04-17 06:54:19.018668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.522 [2024-04-17 06:54:19.018813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.522 [2024-04-17 06:54:19.018842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f63450 with addr=10.0.0.2, port=4420 00:28:14.522 [2024-04-17 06:54:19.018859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f63450 is same with the state(5) to be set 00:28:14.522 [2024-04-17 06:54:19.018882] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f63450 (9): Bad file descriptor 00:28:14.522 [2024-04-17 06:54:19.018923] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:14.522 [2024-04-17 06:54:19.018942] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:14.522 [2024-04-17 06:54:19.018956] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:14.522 [2024-04-17 06:54:19.018976] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:14.522 06:54:19 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:14.522 06:54:19 -- common/autotest_common.sh@904 -- # return 0 00:28:14.522 06:54:19 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:28:14.522 06:54:19 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:28:14.522 06:54:19 -- common/autotest_common.sh@901 -- # local max=10 00:28:14.522 06:54:19 -- common/autotest_common.sh@902 -- # (( max-- )) 00:28:14.522 06:54:19 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:28:14.522 [2024-04-17 06:54:19.028566] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:14.522 [2024-04-17 06:54:19.028827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.522 [2024-04-17 06:54:19.028994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.522 [2024-04-17 06:54:19.029020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f63450 with addr=10.0.0.2, port=4420 00:28:14.522 [2024-04-17 06:54:19.029036] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f63450 is same with the state(5) to be set 00:28:14.522 [2024-04-17 06:54:19.029059] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f63450 (9): Bad file descriptor 00:28:14.522 [2024-04-17 06:54:19.029080] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:14.522 [2024-04-17 06:54:19.029095] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:14.522 [2024-04-17 06:54:19.029109] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in 
failed state. 00:28:14.522 [2024-04-17 06:54:19.029128] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:14.522 06:54:19 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:28:14.522 06:54:19 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:14.522 06:54:19 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:14.522 06:54:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.522 06:54:19 -- common/autotest_common.sh@10 -- # set +x 00:28:14.522 06:54:19 -- host/discovery.sh@63 -- # sort -n 00:28:14.522 06:54:19 -- host/discovery.sh@63 -- # xargs 00:28:14.522 [2024-04-17 06:54:19.038639] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:14.522 [2024-04-17 06:54:19.039695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.522 [2024-04-17 06:54:19.039917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.522 [2024-04-17 06:54:19.039946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f63450 with addr=10.0.0.2, port=4420 00:28:14.522 [2024-04-17 06:54:19.039964] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f63450 is same with the state(5) to be set 00:28:14.522 [2024-04-17 06:54:19.039987] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f63450 (9): Bad file descriptor 00:28:14.522 [2024-04-17 06:54:19.040051] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:14.522 [2024-04-17 06:54:19.040072] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:14.522 [2024-04-17 06:54:19.040091] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:14.522 [2024-04-17 06:54:19.040113] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:14.522 06:54:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.522 [2024-04-17 06:54:19.048710] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:14.522 [2024-04-17 06:54:19.048942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.522 [2024-04-17 06:54:19.049107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.522 [2024-04-17 06:54:19.049133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f63450 with addr=10.0.0.2, port=4420 00:28:14.522 [2024-04-17 06:54:19.049149] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f63450 is same with the state(5) to be set 00:28:14.522 [2024-04-17 06:54:19.049172] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f63450 (9): Bad file descriptor 00:28:14.522 [2024-04-17 06:54:19.049218] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:14.523 [2024-04-17 06:54:19.049237] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:14.523 [2024-04-17 06:54:19.049251] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:28:14.523 [2024-04-17 06:54:19.049270] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:14.523 [2024-04-17 06:54:19.058779] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:14.523 [2024-04-17 06:54:19.059071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.523 [2024-04-17 06:54:19.059209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.523 [2024-04-17 06:54:19.059237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f63450 with addr=10.0.0.2, port=4420 00:28:14.523 [2024-04-17 06:54:19.059254] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f63450 is same with the state(5) to be set 00:28:14.523 [2024-04-17 06:54:19.059277] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f63450 (9): Bad file descriptor 00:28:14.523 [2024-04-17 06:54:19.059313] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:14.523 [2024-04-17 06:54:19.059332] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:14.523 [2024-04-17 06:54:19.059346] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:14.523 [2024-04-17 06:54:19.059365] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:14.523 [2024-04-17 06:54:19.068847] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:14.523 [2024-04-17 06:54:19.069138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.523 [2024-04-17 06:54:19.069319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.523 [2024-04-17 06:54:19.069346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f63450 with addr=10.0.0.2, port=4420 00:28:14.523 [2024-04-17 06:54:19.069363] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f63450 is same with the state(5) to be set 00:28:14.523 [2024-04-17 06:54:19.069385] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f63450 (9): Bad file descriptor 00:28:14.523 [2024-04-17 06:54:19.069419] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:14.523 [2024-04-17 06:54:19.069447] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:14.523 [2024-04-17 06:54:19.069461] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:14.523 [2024-04-17 06:54:19.069501] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
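The repeated connect() failures above (errno 111, i.e. ECONNREFUSED) are expected here: the 4420 listener was just removed from nqn.2016-06.io.spdk:cnode0, so the host keeps failing to reconnect on that path until the discovery poller prunes it, as the "not found"/"found again" lines below show. The path check the test performs next can also be run by hand against the host's RPC socket, using the same jq filter as the trace:

  # List the remaining transport service IDs for controller nvme0; only 4421 should be left.
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs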
00:28:14.523 [2024-04-17 06:54:19.070419] bdev_nvme.c:6693:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:28:14.523 [2024-04-17 06:54:19.070457] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:14.523 06:54:19 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:28:14.523 06:54:19 -- common/autotest_common.sh@906 -- # sleep 1 00:28:15.897 06:54:20 -- common/autotest_common.sh@902 -- # (( max-- )) 00:28:15.897 06:54:20 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:28:15.897 06:54:20 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:28:15.897 06:54:20 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:15.897 06:54:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.897 06:54:20 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:15.897 06:54:20 -- common/autotest_common.sh@10 -- # set +x 00:28:15.897 06:54:20 -- host/discovery.sh@63 -- # sort -n 00:28:15.897 06:54:20 -- host/discovery.sh@63 -- # xargs 00:28:15.897 06:54:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.897 06:54:20 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:28:15.897 06:54:20 -- common/autotest_common.sh@904 -- # return 0 00:28:15.897 06:54:20 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:28:15.897 06:54:20 -- host/discovery.sh@79 -- # expected_count=0 00:28:15.897 06:54:20 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:15.897 06:54:20 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:15.897 06:54:20 -- common/autotest_common.sh@901 -- # local max=10 00:28:15.897 06:54:20 -- common/autotest_common.sh@902 -- # (( max-- )) 00:28:15.897 06:54:20 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:15.897 06:54:20 -- common/autotest_common.sh@903 -- # get_notification_count 00:28:15.897 06:54:20 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:15.897 06:54:20 -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:15.897 06:54:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.897 06:54:20 -- common/autotest_common.sh@10 -- # set +x 00:28:15.897 06:54:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.897 06:54:20 -- host/discovery.sh@74 -- # notification_count=0 00:28:15.897 06:54:20 -- host/discovery.sh@75 -- # notify_id=2 00:28:15.897 06:54:20 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:28:15.897 06:54:20 -- common/autotest_common.sh@904 -- # return 0 00:28:15.897 06:54:20 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:28:15.897 06:54:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.897 06:54:20 -- common/autotest_common.sh@10 -- # set +x 00:28:15.897 06:54:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.897 06:54:20 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:28:15.897 06:54:20 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:28:15.897 06:54:20 -- common/autotest_common.sh@901 -- # local max=10 00:28:15.897 06:54:20 -- common/autotest_common.sh@902 -- # (( max-- )) 00:28:15.897 06:54:20 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:28:15.897 06:54:20 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:28:15.897 06:54:20 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:15.897 06:54:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.897 06:54:20 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:15.897 06:54:20 -- common/autotest_common.sh@10 -- # set +x 00:28:15.897 06:54:20 -- host/discovery.sh@59 -- # sort 00:28:15.897 06:54:20 -- host/discovery.sh@59 -- # xargs 00:28:15.897 06:54:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.897 06:54:20 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:28:15.897 06:54:20 -- common/autotest_common.sh@904 -- # return 0 00:28:15.897 06:54:20 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:28:15.897 06:54:20 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:28:15.897 06:54:20 -- common/autotest_common.sh@901 -- # local max=10 00:28:15.897 06:54:20 -- common/autotest_common.sh@902 -- # (( max-- )) 00:28:15.897 06:54:20 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:28:15.898 06:54:20 -- common/autotest_common.sh@903 -- # get_bdev_list 00:28:15.898 06:54:20 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:15.898 06:54:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.898 06:54:20 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:15.898 06:54:20 -- common/autotest_common.sh@10 -- # set +x 00:28:15.898 06:54:20 -- host/discovery.sh@55 -- # sort 00:28:15.898 06:54:20 -- host/discovery.sh@55 -- # xargs 00:28:15.898 06:54:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.898 06:54:20 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:28:15.898 06:54:20 -- common/autotest_common.sh@904 -- # return 0 00:28:15.898 06:54:20 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:28:15.898 06:54:20 -- host/discovery.sh@79 -- # expected_count=2 00:28:15.898 06:54:20 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:15.898 06:54:20 -- common/autotest_common.sh@900 -- # 
local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:15.898 06:54:20 -- common/autotest_common.sh@901 -- # local max=10 00:28:15.898 06:54:20 -- common/autotest_common.sh@902 -- # (( max-- )) 00:28:15.898 06:54:20 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:15.898 06:54:20 -- common/autotest_common.sh@903 -- # get_notification_count 00:28:15.898 06:54:20 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:15.898 06:54:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.898 06:54:20 -- host/discovery.sh@74 -- # jq '. | length' 00:28:15.898 06:54:20 -- common/autotest_common.sh@10 -- # set +x 00:28:15.898 06:54:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.898 06:54:20 -- host/discovery.sh@74 -- # notification_count=2 00:28:15.898 06:54:20 -- host/discovery.sh@75 -- # notify_id=4 00:28:15.898 06:54:20 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:28:15.898 06:54:20 -- common/autotest_common.sh@904 -- # return 0 00:28:15.898 06:54:20 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:15.898 06:54:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.898 06:54:20 -- common/autotest_common.sh@10 -- # set +x 00:28:16.832 [2024-04-17 06:54:21.350006] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:16.832 [2024-04-17 06:54:21.350042] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:16.832 [2024-04-17 06:54:21.350063] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:16.832 [2024-04-17 06:54:21.438380] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:28:17.091 [2024-04-17 06:54:21.504623] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:17.091 [2024-04-17 06:54:21.504666] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:17.091 06:54:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.091 06:54:21 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:17.091 06:54:21 -- common/autotest_common.sh@638 -- # local es=0 00:28:17.091 06:54:21 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:17.091 06:54:21 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:28:17.091 06:54:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:17.091 06:54:21 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:28:17.091 06:54:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:17.091 06:54:21 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:17.091 06:54:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.091 06:54:21 -- 
common/autotest_common.sh@10 -- # set +x 00:28:17.091 request: 00:28:17.091 { 00:28:17.091 "name": "nvme", 00:28:17.091 "trtype": "tcp", 00:28:17.091 "traddr": "10.0.0.2", 00:28:17.091 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:17.091 "adrfam": "ipv4", 00:28:17.091 "trsvcid": "8009", 00:28:17.091 "wait_for_attach": true, 00:28:17.091 "method": "bdev_nvme_start_discovery", 00:28:17.091 "req_id": 1 00:28:17.091 } 00:28:17.091 Got JSON-RPC error response 00:28:17.091 response: 00:28:17.091 { 00:28:17.091 "code": -17, 00:28:17.091 "message": "File exists" 00:28:17.091 } 00:28:17.091 06:54:21 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:28:17.091 06:54:21 -- common/autotest_common.sh@641 -- # es=1 00:28:17.091 06:54:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:17.091 06:54:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:28:17.091 06:54:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:17.091 06:54:21 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:28:17.091 06:54:21 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:17.091 06:54:21 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:17.091 06:54:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.091 06:54:21 -- common/autotest_common.sh@10 -- # set +x 00:28:17.091 06:54:21 -- host/discovery.sh@67 -- # sort 00:28:17.091 06:54:21 -- host/discovery.sh@67 -- # xargs 00:28:17.091 06:54:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.091 06:54:21 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:28:17.091 06:54:21 -- host/discovery.sh@146 -- # get_bdev_list 00:28:17.091 06:54:21 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:17.091 06:54:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.091 06:54:21 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:17.091 06:54:21 -- common/autotest_common.sh@10 -- # set +x 00:28:17.091 06:54:21 -- host/discovery.sh@55 -- # sort 00:28:17.091 06:54:21 -- host/discovery.sh@55 -- # xargs 00:28:17.091 06:54:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.091 06:54:21 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:17.091 06:54:21 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:17.091 06:54:21 -- common/autotest_common.sh@638 -- # local es=0 00:28:17.091 06:54:21 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:17.091 06:54:21 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:28:17.091 06:54:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:17.091 06:54:21 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:28:17.091 06:54:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:17.091 06:54:21 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:17.091 06:54:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.091 06:54:21 -- common/autotest_common.sh@10 -- # set +x 00:28:17.091 request: 00:28:17.091 { 00:28:17.091 "name": "nvme_second", 00:28:17.091 "trtype": "tcp", 00:28:17.091 "traddr": "10.0.0.2", 00:28:17.091 "hostnqn": 
"nqn.2021-12.io.spdk:test", 00:28:17.091 "adrfam": "ipv4", 00:28:17.091 "trsvcid": "8009", 00:28:17.091 "wait_for_attach": true, 00:28:17.091 "method": "bdev_nvme_start_discovery", 00:28:17.091 "req_id": 1 00:28:17.091 } 00:28:17.091 Got JSON-RPC error response 00:28:17.091 response: 00:28:17.091 { 00:28:17.091 "code": -17, 00:28:17.091 "message": "File exists" 00:28:17.091 } 00:28:17.091 06:54:21 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:28:17.091 06:54:21 -- common/autotest_common.sh@641 -- # es=1 00:28:17.091 06:54:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:17.091 06:54:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:28:17.091 06:54:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:17.091 06:54:21 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:28:17.091 06:54:21 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:17.091 06:54:21 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:17.091 06:54:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.091 06:54:21 -- host/discovery.sh@67 -- # sort 00:28:17.091 06:54:21 -- common/autotest_common.sh@10 -- # set +x 00:28:17.091 06:54:21 -- host/discovery.sh@67 -- # xargs 00:28:17.091 06:54:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.091 06:54:21 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:28:17.091 06:54:21 -- host/discovery.sh@152 -- # get_bdev_list 00:28:17.092 06:54:21 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:17.092 06:54:21 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:17.092 06:54:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.092 06:54:21 -- common/autotest_common.sh@10 -- # set +x 00:28:17.092 06:54:21 -- host/discovery.sh@55 -- # sort 00:28:17.092 06:54:21 -- host/discovery.sh@55 -- # xargs 00:28:17.092 06:54:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.350 06:54:21 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:17.350 06:54:21 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:17.350 06:54:21 -- common/autotest_common.sh@638 -- # local es=0 00:28:17.350 06:54:21 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:17.350 06:54:21 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:28:17.350 06:54:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:17.350 06:54:21 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:28:17.350 06:54:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:17.350 06:54:21 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:17.350 06:54:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.350 06:54:21 -- common/autotest_common.sh@10 -- # set +x 00:28:18.282 [2024-04-17 06:54:22.717077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.282 [2024-04-17 06:54:22.717276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.282 [2024-04-17 06:54:22.717316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0x1f95420 with addr=10.0.0.2, port=8010 00:28:18.282 [2024-04-17 06:54:22.717344] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:18.282 [2024-04-17 06:54:22.717359] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:18.282 [2024-04-17 06:54:22.717374] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:19.214 [2024-04-17 06:54:23.719510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.214 [2024-04-17 06:54:23.719728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.214 [2024-04-17 06:54:23.719757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f95420 with addr=10.0.0.2, port=8010 00:28:19.214 [2024-04-17 06:54:23.719788] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:19.214 [2024-04-17 06:54:23.719804] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:19.214 [2024-04-17 06:54:23.719818] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:20.155 [2024-04-17 06:54:24.721689] bdev_nvme.c:6949:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:28:20.155 request: 00:28:20.155 { 00:28:20.155 "name": "nvme_second", 00:28:20.155 "trtype": "tcp", 00:28:20.155 "traddr": "10.0.0.2", 00:28:20.155 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:20.155 "adrfam": "ipv4", 00:28:20.155 "trsvcid": "8010", 00:28:20.155 "attach_timeout_ms": 3000, 00:28:20.155 "method": "bdev_nvme_start_discovery", 00:28:20.155 "req_id": 1 00:28:20.155 } 00:28:20.155 Got JSON-RPC error response 00:28:20.155 response: 00:28:20.155 { 00:28:20.155 "code": -110, 00:28:20.155 "message": "Connection timed out" 00:28:20.155 } 00:28:20.155 06:54:24 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:28:20.155 06:54:24 -- common/autotest_common.sh@641 -- # es=1 00:28:20.155 06:54:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:20.155 06:54:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:28:20.155 06:54:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:20.155 06:54:24 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:28:20.155 06:54:24 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:20.155 06:54:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.155 06:54:24 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:20.155 06:54:24 -- common/autotest_common.sh@10 -- # set +x 00:28:20.155 06:54:24 -- host/discovery.sh@67 -- # sort 00:28:20.155 06:54:24 -- host/discovery.sh@67 -- # xargs 00:28:20.155 06:54:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.442 06:54:24 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:28:20.442 06:54:24 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:28:20.442 06:54:24 -- host/discovery.sh@161 -- # kill 94091 00:28:20.442 06:54:24 -- host/discovery.sh@162 -- # nvmftestfini 00:28:20.442 06:54:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:20.442 06:54:24 -- nvmf/common.sh@117 -- # sync 00:28:20.442 06:54:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:20.442 06:54:24 -- nvmf/common.sh@120 -- # set +e 00:28:20.442 06:54:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:20.442 06:54:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:20.442 rmmod nvme_tcp 00:28:20.442 rmmod nvme_fabrics 
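The two rejected bdev_nvme_start_discovery calls above are deliberate negative cases: re-issuing the RPC for a discovery name that is already attached fails with -17 ("File exists"), and pointing a second discovery service at port 8010, where nothing is listening, with a 3000 ms attach timeout fails with -110 ("Connection timed out") after repeated connect failures (errno 111). A minimal sketch of the timeout case outside the harness, assuming SPDK's stock scripts/rpc.py client and the same /tmp/host.sock application socket used throughout this run:

    # hypothetical standalone reproduction; expect JSON-RPC error -110 after roughly 3 s
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -T 3000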
00:28:20.442 rmmod nvme_keyring 00:28:20.442 06:54:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:20.442 06:54:24 -- nvmf/common.sh@124 -- # set -e 00:28:20.442 06:54:24 -- nvmf/common.sh@125 -- # return 0 00:28:20.442 06:54:24 -- nvmf/common.sh@478 -- # '[' -n 94064 ']' 00:28:20.442 06:54:24 -- nvmf/common.sh@479 -- # killprocess 94064 00:28:20.442 06:54:24 -- common/autotest_common.sh@936 -- # '[' -z 94064 ']' 00:28:20.442 06:54:24 -- common/autotest_common.sh@940 -- # kill -0 94064 00:28:20.442 06:54:24 -- common/autotest_common.sh@941 -- # uname 00:28:20.442 06:54:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:20.442 06:54:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94064 00:28:20.442 06:54:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:20.442 06:54:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:20.442 06:54:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94064' 00:28:20.442 killing process with pid 94064 00:28:20.442 06:54:24 -- common/autotest_common.sh@955 -- # kill 94064 00:28:20.442 06:54:24 -- common/autotest_common.sh@960 -- # wait 94064 00:28:20.704 06:54:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:28:20.704 06:54:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:20.705 06:54:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:20.705 06:54:25 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:20.705 06:54:25 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:20.705 06:54:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.705 06:54:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:20.705 06:54:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.606 06:54:27 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:22.606 00:28:22.606 real 0m14.119s 00:28:22.606 user 0m21.047s 00:28:22.606 sys 0m2.832s 00:28:22.606 06:54:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:22.606 06:54:27 -- common/autotest_common.sh@10 -- # set +x 00:28:22.606 ************************************ 00:28:22.606 END TEST nvmf_discovery 00:28:22.606 ************************************ 00:28:22.606 06:54:27 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:22.606 06:54:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:22.606 06:54:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:22.606 06:54:27 -- common/autotest_common.sh@10 -- # set +x 00:28:22.865 ************************************ 00:28:22.865 START TEST nvmf_discovery_remove_ifc 00:28:22.865 ************************************ 00:28:22.865 06:54:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:22.865 * Looking for test storage... 
00:28:22.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:22.865 06:54:27 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:22.865 06:54:27 -- nvmf/common.sh@7 -- # uname -s 00:28:22.865 06:54:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:22.865 06:54:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:22.865 06:54:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:22.865 06:54:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:22.865 06:54:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:22.865 06:54:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:22.865 06:54:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:22.865 06:54:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:22.865 06:54:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:22.865 06:54:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:22.865 06:54:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:22.865 06:54:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:22.865 06:54:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:22.865 06:54:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:22.865 06:54:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:22.865 06:54:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:22.865 06:54:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:22.865 06:54:27 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:22.865 06:54:27 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:22.865 06:54:27 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:22.865 06:54:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.865 06:54:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.865 06:54:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.865 06:54:27 -- paths/export.sh@5 -- # export PATH 00:28:22.865 06:54:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.865 06:54:27 -- nvmf/common.sh@47 -- # : 0 00:28:22.865 06:54:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:22.865 06:54:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:22.865 06:54:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:22.865 06:54:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:22.865 06:54:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:22.865 06:54:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:22.865 06:54:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:22.865 06:54:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:22.865 06:54:27 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:28:22.865 06:54:27 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:28:22.865 06:54:27 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:28:22.865 06:54:27 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:28:22.865 06:54:27 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:28:22.865 06:54:27 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:28:22.865 06:54:27 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:28:22.865 06:54:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:22.865 06:54:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:22.865 06:54:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:22.865 06:54:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:22.865 06:54:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:22.865 06:54:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.865 06:54:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:22.865 06:54:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.865 06:54:27 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:28:22.865 06:54:27 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:28:22.865 06:54:27 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:22.866 06:54:27 -- common/autotest_common.sh@10 -- # set +x 00:28:24.768 06:54:29 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:24.768 06:54:29 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:24.768 06:54:29 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:24.768 06:54:29 
-- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:24.768 06:54:29 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:24.768 06:54:29 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:24.768 06:54:29 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:24.768 06:54:29 -- nvmf/common.sh@295 -- # net_devs=() 00:28:24.768 06:54:29 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:24.768 06:54:29 -- nvmf/common.sh@296 -- # e810=() 00:28:24.768 06:54:29 -- nvmf/common.sh@296 -- # local -ga e810 00:28:24.768 06:54:29 -- nvmf/common.sh@297 -- # x722=() 00:28:24.768 06:54:29 -- nvmf/common.sh@297 -- # local -ga x722 00:28:24.768 06:54:29 -- nvmf/common.sh@298 -- # mlx=() 00:28:24.768 06:54:29 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:24.768 06:54:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:24.768 06:54:29 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:24.768 06:54:29 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:24.768 06:54:29 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:24.768 06:54:29 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:24.768 06:54:29 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:24.768 06:54:29 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:24.768 06:54:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:24.768 06:54:29 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:24.768 06:54:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:24.768 06:54:29 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:24.768 06:54:29 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:24.768 06:54:29 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:24.768 06:54:29 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:24.768 06:54:29 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:24.768 06:54:29 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:24.768 06:54:29 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:24.768 06:54:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:24.768 06:54:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:24.768 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:24.768 06:54:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:24.768 06:54:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:24.768 06:54:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.768 06:54:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.768 06:54:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:24.768 06:54:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:24.768 06:54:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:24.768 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:24.768 06:54:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:24.768 06:54:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:24.768 06:54:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.768 06:54:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.769 06:54:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:24.769 06:54:29 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:24.769 06:54:29 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:24.769 06:54:29 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:24.769 06:54:29 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:24.769 06:54:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.769 06:54:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:24.769 06:54:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.769 06:54:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:24.769 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:24.769 06:54:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.769 06:54:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:24.769 06:54:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.769 06:54:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:24.769 06:54:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.769 06:54:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:24.769 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:24.769 06:54:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.769 06:54:29 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:28:24.769 06:54:29 -- nvmf/common.sh@403 -- # is_hw=yes 00:28:24.769 06:54:29 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:28:24.769 06:54:29 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:28:24.769 06:54:29 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:28:24.769 06:54:29 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:24.769 06:54:29 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:24.769 06:54:29 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:24.769 06:54:29 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:24.769 06:54:29 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:24.769 06:54:29 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:24.769 06:54:29 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:24.769 06:54:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:24.769 06:54:29 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:24.769 06:54:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:24.769 06:54:29 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:24.769 06:54:29 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:24.769 06:54:29 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:24.769 06:54:29 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:24.769 06:54:29 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:24.769 06:54:29 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:24.769 06:54:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:24.769 06:54:29 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:24.769 06:54:29 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:24.769 06:54:29 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:24.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:24.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:28:24.769 00:28:24.769 --- 10.0.0.2 ping statistics --- 00:28:24.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.769 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:28:24.769 06:54:29 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:24.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:24.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:28:24.769 00:28:24.769 --- 10.0.0.1 ping statistics --- 00:28:24.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.769 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:28:24.769 06:54:29 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:24.769 06:54:29 -- nvmf/common.sh@411 -- # return 0 00:28:24.769 06:54:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:28:24.769 06:54:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:24.769 06:54:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:24.769 06:54:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:24.769 06:54:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:24.769 06:54:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:24.769 06:54:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:24.769 06:54:29 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:24.769 06:54:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:24.769 06:54:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:24.769 06:54:29 -- common/autotest_common.sh@10 -- # set +x 00:28:24.769 06:54:29 -- nvmf/common.sh@470 -- # nvmfpid=97388 00:28:24.769 06:54:29 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:24.769 06:54:29 -- nvmf/common.sh@471 -- # waitforlisten 97388 00:28:24.769 06:54:29 -- common/autotest_common.sh@817 -- # '[' -z 97388 ']' 00:28:24.769 06:54:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:24.769 06:54:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:24.769 06:54:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:24.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:24.769 06:54:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:24.769 06:54:29 -- common/autotest_common.sh@10 -- # set +x 00:28:25.028 [2024-04-17 06:54:29.406140] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:28:25.028 [2024-04-17 06:54:29.406244] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:25.028 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.028 [2024-04-17 06:54:29.469837] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.028 [2024-04-17 06:54:29.551890] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:25.028 [2024-04-17 06:54:29.551950] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:25.028 [2024-04-17 06:54:29.551963] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:25.028 [2024-04-17 06:54:29.551990] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:25.028 [2024-04-17 06:54:29.552000] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:25.028 [2024-04-17 06:54:29.552028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.286 06:54:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:25.286 06:54:29 -- common/autotest_common.sh@850 -- # return 0 00:28:25.286 06:54:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:25.286 06:54:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:25.286 06:54:29 -- common/autotest_common.sh@10 -- # set +x 00:28:25.286 06:54:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:25.286 06:54:29 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:25.286 06:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:25.286 06:54:29 -- common/autotest_common.sh@10 -- # set +x 00:28:25.286 [2024-04-17 06:54:29.705301] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:25.286 [2024-04-17 06:54:29.713450] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:25.286 null0 00:28:25.286 [2024-04-17 06:54:29.745415] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:25.286 06:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:25.286 06:54:29 -- host/discovery_remove_ifc.sh@59 -- # hostpid=97409 00:28:25.286 06:54:29 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:28:25.286 06:54:29 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 97409 /tmp/host.sock 00:28:25.286 06:54:29 -- common/autotest_common.sh@817 -- # '[' -z 97409 ']' 00:28:25.286 06:54:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:28:25.286 06:54:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:25.286 06:54:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:25.286 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:25.286 06:54:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:25.286 06:54:29 -- common/autotest_common.sh@10 -- # set +x 00:28:25.286 [2024-04-17 06:54:29.815298] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:28:25.286 [2024-04-17 06:54:29.815378] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97409 ] 00:28:25.286 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.286 [2024-04-17 06:54:29.881562] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.545 [2024-04-17 06:54:29.969561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.545 06:54:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:25.545 06:54:30 -- common/autotest_common.sh@850 -- # return 0 00:28:25.545 06:54:30 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:25.545 06:54:30 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:25.545 06:54:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:25.545 06:54:30 -- common/autotest_common.sh@10 -- # set +x 00:28:25.545 06:54:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:25.545 06:54:30 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:25.545 06:54:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:25.545 06:54:30 -- common/autotest_common.sh@10 -- # set +x 00:28:25.545 06:54:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:25.545 06:54:30 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:25.545 06:54:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:25.545 06:54:30 -- common/autotest_common.sh@10 -- # set +x 00:28:26.918 [2024-04-17 06:54:31.153292] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:26.918 [2024-04-17 06:54:31.153329] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:26.918 [2024-04-17 06:54:31.153351] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:26.918 [2024-04-17 06:54:31.239642] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:26.918 [2024-04-17 06:54:31.385395] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:26.918 [2024-04-17 06:54:31.385458] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:26.918 [2024-04-17 06:54:31.385515] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:26.918 [2024-04-17 06:54:31.385551] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:26.918 [2024-04-17 06:54:31.385582] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:26.918 06:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:26.918 06:54:31 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:26.918 06:54:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:26.918 06:54:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:26.918 06:54:31 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:28:26.918 06:54:31 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:26.918 06:54:31 -- common/autotest_common.sh@10 -- # set +x 00:28:26.918 06:54:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:26.918 06:54:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:26.918 [2024-04-17 06:54:31.391514] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xbc3dc0 was disconnected and freed. delete nvme_qpair. 00:28:26.918 06:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:26.918 06:54:31 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:26.918 06:54:31 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:28:26.918 06:54:31 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:28:26.918 06:54:31 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:26.918 06:54:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:26.918 06:54:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:26.918 06:54:31 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:26.918 06:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:26.918 06:54:31 -- common/autotest_common.sh@10 -- # set +x 00:28:26.918 06:54:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:26.918 06:54:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:26.918 06:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:27.176 06:54:31 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:27.176 06:54:31 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:28.109 06:54:32 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:28.109 06:54:32 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:28.109 06:54:32 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:28.109 06:54:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:28.109 06:54:32 -- common/autotest_common.sh@10 -- # set +x 00:28:28.109 06:54:32 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:28.109 06:54:32 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:28.109 06:54:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:28.109 06:54:32 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:28.109 06:54:32 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:29.043 06:54:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:29.043 06:54:33 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:29.043 06:54:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:29.043 06:54:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:29.043 06:54:33 -- common/autotest_common.sh@10 -- # set +x 00:28:29.043 06:54:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:29.043 06:54:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:29.043 06:54:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:29.043 06:54:33 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:29.043 06:54:33 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:30.414 06:54:34 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:30.414 06:54:34 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:30.414 06:54:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:30.414 06:54:34 -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:30.414 06:54:34 -- common/autotest_common.sh@10 -- # set +x 00:28:30.414 06:54:34 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:30.414 06:54:34 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:30.414 06:54:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:30.414 06:54:34 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:30.414 06:54:34 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:31.346 06:54:35 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:31.346 06:54:35 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:31.346 06:54:35 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:31.346 06:54:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:31.346 06:54:35 -- common/autotest_common.sh@10 -- # set +x 00:28:31.346 06:54:35 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:31.347 06:54:35 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:31.347 06:54:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:31.347 06:54:35 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:31.347 06:54:35 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:32.280 06:54:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:32.280 06:54:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:32.280 06:54:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:32.280 06:54:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:32.280 06:54:36 -- common/autotest_common.sh@10 -- # set +x 00:28:32.280 06:54:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:32.280 06:54:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:32.280 06:54:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:32.280 06:54:36 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:32.280 06:54:36 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:32.280 [2024-04-17 06:54:36.826633] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:28:32.280 [2024-04-17 06:54:36.826697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:32.280 [2024-04-17 06:54:36.826717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:32.280 [2024-04-17 06:54:36.826740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:32.280 [2024-04-17 06:54:36.826753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:32.280 [2024-04-17 06:54:36.826766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:32.280 [2024-04-17 06:54:36.826778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:32.280 [2024-04-17 06:54:36.826790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:32.280 [2024-04-17 06:54:36.826802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:32.280 [2024-04-17 06:54:36.826815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:32.280 [2024-04-17 06:54:36.826828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:32.280 [2024-04-17 06:54:36.826840] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8a0b0 is same with the state(5) to be set 00:28:32.280 [2024-04-17 06:54:36.836649] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8a0b0 (9): Bad file descriptor 00:28:32.280 [2024-04-17 06:54:36.846693] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:33.212 06:54:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:33.212 06:54:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:33.212 06:54:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:33.212 06:54:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:33.212 06:54:37 -- common/autotest_common.sh@10 -- # set +x 00:28:33.212 06:54:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:33.212 06:54:37 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:33.470 [2024-04-17 06:54:37.903217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:34.403 [2024-04-17 06:54:38.927217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:34.403 [2024-04-17 06:54:38.927310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8a0b0 with addr=10.0.0.2, port=4420 00:28:34.403 [2024-04-17 06:54:38.927336] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8a0b0 is same with the state(5) to be set 00:28:34.403 [2024-04-17 06:54:38.927838] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8a0b0 (9): Bad file descriptor 00:28:34.403 [2024-04-17 06:54:38.927879] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
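The reset failures above are the expected result of dropping the target-side interface: the host attached this discovery controller with --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1, so reconnects are only retried for about two seconds before the controller and its nvme0n1 bdev are torn down. For reference, a sketch of that attach call using the same flags recorded earlier in this trace, again assuming the stock scripts/rpc.py client:

    # hypothetical standalone form of the attach performed by discovery_remove_ifc.sh
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach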
00:28:34.403 [2024-04-17 06:54:38.927914] bdev_nvme.c:6657:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:28:34.403 [2024-04-17 06:54:38.927965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.403 [2024-04-17 06:54:38.927985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.403 [2024-04-17 06:54:38.928003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.403 [2024-04-17 06:54:38.928016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.403 [2024-04-17 06:54:38.928029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.403 [2024-04-17 06:54:38.928042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.403 [2024-04-17 06:54:38.928064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.403 [2024-04-17 06:54:38.928077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.403 [2024-04-17 06:54:38.928090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.403 [2024-04-17 06:54:38.928102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.403 [2024-04-17 06:54:38.928114] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
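The repeated get_bdev_list/sleep 1 rounds above (and the empty list that follows) are the harness polling for nvme0n1 to disappear once the path drops. A minimal sketch of that polling loop, assuming the same rpc.py and jq tooling the script itself uses:

    # hypothetical standalone version of the wait_for_bdev/get_bdev_list polling seen in this trace
    for _ in $(seq 1 20); do
        names=$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
        [ -z "$names" ] && break    # nvme0n1 is gone once the controller is torn down
        sleep 1
    done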
00:28:34.403 [2024-04-17 06:54:38.928373] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8a4c0 (9): Bad file descriptor 00:28:34.403 [2024-04-17 06:54:38.929391] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:28:34.403 [2024-04-17 06:54:38.929413] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:28:34.403 06:54:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:34.403 06:54:38 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:34.403 06:54:38 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:35.775 06:54:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:35.776 06:54:39 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:35.776 06:54:39 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:35.776 06:54:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:35.776 06:54:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:35.776 06:54:39 -- common/autotest_common.sh@10 -- # set +x 00:28:35.776 06:54:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:35.776 06:54:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:35.776 06:54:39 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:28:35.776 06:54:39 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:35.776 06:54:39 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:35.776 06:54:40 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:28:35.776 06:54:40 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:35.776 06:54:40 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:35.776 06:54:40 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:35.776 06:54:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:35.776 06:54:40 -- common/autotest_common.sh@10 -- # set +x 00:28:35.776 06:54:40 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:35.776 06:54:40 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:35.776 06:54:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:35.776 06:54:40 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:35.776 06:54:40 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:36.367 [2024-04-17 06:54:40.939113] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:36.367 [2024-04-17 06:54:40.939150] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:36.367 [2024-04-17 06:54:40.939171] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:36.631 [2024-04-17 06:54:41.067640] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:28:36.631 06:54:41 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:36.631 06:54:41 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:36.631 06:54:41 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:36.631 06:54:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:36.631 06:54:41 -- common/autotest_common.sh@10 -- # set +x 00:28:36.631 06:54:41 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:36.631 
06:54:41 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:36.631 06:54:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:36.631 06:54:41 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:36.631 06:54:41 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:36.889 [2024-04-17 06:54:41.290077] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:36.889 [2024-04-17 06:54:41.290129] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:36.889 [2024-04-17 06:54:41.290182] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:36.889 [2024-04-17 06:54:41.290206] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:28:36.889 [2024-04-17 06:54:41.290220] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:36.889 [2024-04-17 06:54:41.297797] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xbce4c0 was disconnected and freed. delete nvme_qpair. 00:28:37.824 06:54:42 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:37.824 06:54:42 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:37.824 06:54:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:37.824 06:54:42 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:37.824 06:54:42 -- common/autotest_common.sh@10 -- # set +x 00:28:37.824 06:54:42 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:37.824 06:54:42 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:37.824 06:54:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:37.824 06:54:42 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:37.824 06:54:42 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:37.824 06:54:42 -- host/discovery_remove_ifc.sh@90 -- # killprocess 97409 00:28:37.824 06:54:42 -- common/autotest_common.sh@936 -- # '[' -z 97409 ']' 00:28:37.824 06:54:42 -- common/autotest_common.sh@940 -- # kill -0 97409 00:28:37.824 06:54:42 -- common/autotest_common.sh@941 -- # uname 00:28:37.824 06:54:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:37.824 06:54:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97409 00:28:37.824 06:54:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:37.824 06:54:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:37.824 06:54:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97409' 00:28:37.824 killing process with pid 97409 00:28:37.824 06:54:42 -- common/autotest_common.sh@955 -- # kill 97409 00:28:37.824 06:54:42 -- common/autotest_common.sh@960 -- # wait 97409 00:28:37.824 06:54:42 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:28:37.824 06:54:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:37.824 06:54:42 -- nvmf/common.sh@117 -- # sync 00:28:38.082 06:54:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:38.082 06:54:42 -- nvmf/common.sh@120 -- # set +e 00:28:38.082 06:54:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:38.082 06:54:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:38.082 rmmod nvme_tcp 00:28:38.082 rmmod nvme_fabrics 00:28:38.082 rmmod nvme_keyring 00:28:38.082 06:54:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:38.082 06:54:42 -- nvmf/common.sh@124 -- # set -e 00:28:38.082 06:54:42 -- nvmf/common.sh@125 -- # 
return 0 00:28:38.082 06:54:42 -- nvmf/common.sh@478 -- # '[' -n 97388 ']' 00:28:38.082 06:54:42 -- nvmf/common.sh@479 -- # killprocess 97388 00:28:38.082 06:54:42 -- common/autotest_common.sh@936 -- # '[' -z 97388 ']' 00:28:38.082 06:54:42 -- common/autotest_common.sh@940 -- # kill -0 97388 00:28:38.082 06:54:42 -- common/autotest_common.sh@941 -- # uname 00:28:38.082 06:54:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:38.082 06:54:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97388 00:28:38.082 06:54:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:38.082 06:54:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:38.082 06:54:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97388' 00:28:38.082 killing process with pid 97388 00:28:38.082 06:54:42 -- common/autotest_common.sh@955 -- # kill 97388 00:28:38.082 06:54:42 -- common/autotest_common.sh@960 -- # wait 97388 00:28:38.340 06:54:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:28:38.340 06:54:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:38.340 06:54:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:38.340 06:54:42 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:38.340 06:54:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:38.340 06:54:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.340 06:54:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:38.340 06:54:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:40.241 06:54:44 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:40.241 00:28:40.241 real 0m17.568s 00:28:40.241 user 0m24.547s 00:28:40.241 sys 0m2.904s 00:28:40.241 06:54:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:40.241 06:54:44 -- common/autotest_common.sh@10 -- # set +x 00:28:40.241 ************************************ 00:28:40.241 END TEST nvmf_discovery_remove_ifc 00:28:40.241 ************************************ 00:28:40.241 06:54:44 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:40.241 06:54:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:40.241 06:54:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:40.241 06:54:44 -- common/autotest_common.sh@10 -- # set +x 00:28:40.499 ************************************ 00:28:40.499 START TEST nvmf_identify_kernel_target 00:28:40.499 ************************************ 00:28:40.499 06:54:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:40.499 * Looking for test storage... 
00:28:40.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:40.499 06:54:44 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:40.499 06:54:44 -- nvmf/common.sh@7 -- # uname -s 00:28:40.499 06:54:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:40.499 06:54:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:40.499 06:54:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:40.499 06:54:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:40.499 06:54:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:40.499 06:54:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:40.499 06:54:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:40.499 06:54:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:40.499 06:54:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:40.499 06:54:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:40.499 06:54:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:40.499 06:54:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:40.499 06:54:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:40.499 06:54:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:40.499 06:54:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:40.499 06:54:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:40.499 06:54:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:40.499 06:54:44 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:40.499 06:54:44 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:40.499 06:54:44 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:40.499 06:54:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.499 06:54:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.500 06:54:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.500 06:54:44 -- paths/export.sh@5 -- # export PATH 00:28:40.500 06:54:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.500 06:54:44 -- nvmf/common.sh@47 -- # : 0 00:28:40.500 06:54:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:40.500 06:54:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:40.500 06:54:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:40.500 06:54:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:40.500 06:54:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:40.500 06:54:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:40.500 06:54:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:40.500 06:54:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:40.500 06:54:44 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:28:40.500 06:54:44 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:40.500 06:54:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:40.500 06:54:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:40.500 06:54:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:40.500 06:54:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:40.500 06:54:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.500 06:54:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:40.500 06:54:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:40.500 06:54:44 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:28:40.500 06:54:44 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:28:40.500 06:54:44 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:40.500 06:54:44 -- common/autotest_common.sh@10 -- # set +x 00:28:42.401 06:54:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:42.402 06:54:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:42.402 06:54:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:42.402 06:54:46 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:42.402 06:54:46 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:42.402 06:54:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:42.402 06:54:46 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:42.402 06:54:46 -- nvmf/common.sh@295 -- # net_devs=() 00:28:42.402 06:54:46 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:42.402 06:54:46 -- nvmf/common.sh@296 -- # e810=() 00:28:42.402 06:54:46 -- nvmf/common.sh@296 -- # local -ga e810 00:28:42.402 06:54:46 -- nvmf/common.sh@297 -- # 
x722=() 00:28:42.402 06:54:46 -- nvmf/common.sh@297 -- # local -ga x722 00:28:42.402 06:54:46 -- nvmf/common.sh@298 -- # mlx=() 00:28:42.402 06:54:46 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:42.402 06:54:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:42.402 06:54:46 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:42.402 06:54:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:42.402 06:54:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:42.402 06:54:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:42.402 06:54:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:42.402 06:54:46 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:42.402 06:54:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:42.402 06:54:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:42.402 06:54:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:42.402 06:54:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:42.402 06:54:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:42.402 06:54:46 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:42.402 06:54:46 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:42.402 06:54:46 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:42.402 06:54:46 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:42.402 06:54:46 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:42.402 06:54:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:42.402 06:54:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:42.402 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:42.402 06:54:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:42.402 06:54:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:42.402 06:54:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.402 06:54:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.402 06:54:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:42.402 06:54:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:42.402 06:54:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:42.402 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:42.402 06:54:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:42.402 06:54:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:42.402 06:54:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.402 06:54:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.402 06:54:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:42.402 06:54:46 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:42.402 06:54:46 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:42.402 06:54:46 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:42.402 06:54:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:42.402 06:54:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.402 06:54:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:42.402 06:54:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.402 06:54:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:42.402 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:42.402 06:54:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:28:42.402 06:54:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:42.402 06:54:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.402 06:54:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:42.402 06:54:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.402 06:54:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:42.402 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:42.402 06:54:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.402 06:54:46 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:28:42.402 06:54:46 -- nvmf/common.sh@403 -- # is_hw=yes 00:28:42.402 06:54:46 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:28:42.402 06:54:46 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:28:42.402 06:54:46 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:28:42.402 06:54:46 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:42.402 06:54:46 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:42.402 06:54:46 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:42.402 06:54:46 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:42.402 06:54:46 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:42.402 06:54:46 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:42.402 06:54:46 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:42.402 06:54:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:42.402 06:54:46 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:42.402 06:54:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:42.402 06:54:46 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:42.402 06:54:46 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:42.402 06:54:46 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:42.402 06:54:46 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:42.402 06:54:46 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:42.402 06:54:46 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:42.402 06:54:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:42.660 06:54:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:42.660 06:54:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:42.660 06:54:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:42.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:42.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:28:42.660 00:28:42.660 --- 10.0.0.2 ping statistics --- 00:28:42.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.660 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:28:42.660 06:54:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:42.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:42.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:28:42.660 00:28:42.660 --- 10.0.0.1 ping statistics --- 00:28:42.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.660 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:28:42.660 06:54:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:42.660 06:54:47 -- nvmf/common.sh@411 -- # return 0 00:28:42.660 06:54:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:28:42.660 06:54:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:42.660 06:54:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:42.660 06:54:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:42.660 06:54:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:42.660 06:54:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:42.660 06:54:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:42.660 06:54:47 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:28:42.660 06:54:47 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:28:42.660 06:54:47 -- nvmf/common.sh@717 -- # local ip 00:28:42.660 06:54:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:42.660 06:54:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:42.660 06:54:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.660 06:54:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.660 06:54:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:42.660 06:54:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.661 06:54:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:42.661 06:54:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:42.661 06:54:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:42.661 06:54:47 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:28:42.661 06:54:47 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:42.661 06:54:47 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:42.661 06:54:47 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:28:42.661 06:54:47 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:42.661 06:54:47 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:42.661 06:54:47 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:42.661 06:54:47 -- nvmf/common.sh@628 -- # local block nvme 00:28:42.661 06:54:47 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:42.661 06:54:47 -- nvmf/common.sh@631 -- # modprobe nvmet 00:28:42.661 06:54:47 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:42.661 06:54:47 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:43.596 Waiting for block devices as requested 00:28:43.854 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:28:43.854 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:43.854 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:43.854 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:44.111 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:44.111 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:44.111 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:44.111 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:44.369 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:44.369 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:44.369 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:44.369 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:44.626 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:44.626 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:44.626 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:44.884 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:44.884 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:44.884 06:54:49 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:28:44.884 06:54:49 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:44.884 06:54:49 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:28:44.884 06:54:49 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:28:44.884 06:54:49 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:44.884 06:54:49 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:44.884 06:54:49 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:28:44.884 06:54:49 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:44.885 06:54:49 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:44.885 No valid GPT data, bailing 00:28:44.885 06:54:49 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:45.145 06:54:49 -- scripts/common.sh@391 -- # pt= 00:28:45.145 06:54:49 -- scripts/common.sh@392 -- # return 1 00:28:45.145 06:54:49 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:28:45.145 06:54:49 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:28:45.145 06:54:49 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:45.145 06:54:49 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:45.145 06:54:49 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:45.145 06:54:49 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:45.145 06:54:49 -- nvmf/common.sh@656 -- # echo 1 00:28:45.145 06:54:49 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:28:45.145 06:54:49 -- nvmf/common.sh@658 -- # echo 1 00:28:45.145 06:54:49 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:28:45.145 06:54:49 -- nvmf/common.sh@661 -- # echo tcp 00:28:45.145 06:54:49 -- nvmf/common.sh@662 -- # echo 4420 00:28:45.145 06:54:49 -- nvmf/common.sh@663 -- # echo ipv4 00:28:45.145 06:54:49 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:45.145 06:54:49 -- nvmf/common.sh@669 -- # 
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:28:45.145 00:28:45.145 Discovery Log Number of Records 2, Generation counter 2 00:28:45.145 =====Discovery Log Entry 0====== 00:28:45.145 trtype: tcp 00:28:45.145 adrfam: ipv4 00:28:45.145 subtype: current discovery subsystem 00:28:45.145 treq: not specified, sq flow control disable supported 00:28:45.145 portid: 1 00:28:45.145 trsvcid: 4420 00:28:45.145 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:45.145 traddr: 10.0.0.1 00:28:45.145 eflags: none 00:28:45.145 sectype: none 00:28:45.145 =====Discovery Log Entry 1====== 00:28:45.145 trtype: tcp 00:28:45.145 adrfam: ipv4 00:28:45.146 subtype: nvme subsystem 00:28:45.146 treq: not specified, sq flow control disable supported 00:28:45.146 portid: 1 00:28:45.146 trsvcid: 4420 00:28:45.146 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:45.146 traddr: 10.0.0.1 00:28:45.146 eflags: none 00:28:45.146 sectype: none 00:28:45.146 06:54:49 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:28:45.146 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:28:45.146 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.146 ===================================================== 00:28:45.146 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:45.146 ===================================================== 00:28:45.146 Controller Capabilities/Features 00:28:45.146 ================================ 00:28:45.146 Vendor ID: 0000 00:28:45.146 Subsystem Vendor ID: 0000 00:28:45.146 Serial Number: 162feb1a92334d11d6c7 00:28:45.146 Model Number: Linux 00:28:45.146 Firmware Version: 6.7.0-68 00:28:45.146 Recommended Arb Burst: 0 00:28:45.146 IEEE OUI Identifier: 00 00 00 00:28:45.146 Multi-path I/O 00:28:45.146 May have multiple subsystem ports: No 00:28:45.146 May have multiple controllers: No 00:28:45.146 Associated with SR-IOV VF: No 00:28:45.146 Max Data Transfer Size: Unlimited 00:28:45.146 Max Number of Namespaces: 0 00:28:45.146 Max Number of I/O Queues: 1024 00:28:45.146 NVMe Specification Version (VS): 1.3 00:28:45.146 NVMe Specification Version (Identify): 1.3 00:28:45.146 Maximum Queue Entries: 1024 00:28:45.146 Contiguous Queues Required: No 00:28:45.146 Arbitration Mechanisms Supported 00:28:45.146 Weighted Round Robin: Not Supported 00:28:45.146 Vendor Specific: Not Supported 00:28:45.146 Reset Timeout: 7500 ms 00:28:45.146 Doorbell Stride: 4 bytes 00:28:45.146 NVM Subsystem Reset: Not Supported 00:28:45.146 Command Sets Supported 00:28:45.146 NVM Command Set: Supported 00:28:45.146 Boot Partition: Not Supported 00:28:45.146 Memory Page Size Minimum: 4096 bytes 00:28:45.146 Memory Page Size Maximum: 4096 bytes 00:28:45.146 Persistent Memory Region: Not Supported 00:28:45.146 Optional Asynchronous Events Supported 00:28:45.146 Namespace Attribute Notices: Not Supported 00:28:45.146 Firmware Activation Notices: Not Supported 00:28:45.146 ANA Change Notices: Not Supported 00:28:45.146 PLE Aggregate Log Change Notices: Not Supported 00:28:45.146 LBA Status Info Alert Notices: Not Supported 00:28:45.146 EGE Aggregate Log Change Notices: Not Supported 00:28:45.146 Normal NVM Subsystem Shutdown event: Not Supported 00:28:45.146 Zone Descriptor Change Notices: Not Supported 00:28:45.146 Discovery Log Change Notices: Supported 
00:28:45.146 Controller Attributes 00:28:45.146 128-bit Host Identifier: Not Supported 00:28:45.146 Non-Operational Permissive Mode: Not Supported 00:28:45.146 NVM Sets: Not Supported 00:28:45.146 Read Recovery Levels: Not Supported 00:28:45.146 Endurance Groups: Not Supported 00:28:45.146 Predictable Latency Mode: Not Supported 00:28:45.146 Traffic Based Keep ALive: Not Supported 00:28:45.146 Namespace Granularity: Not Supported 00:28:45.146 SQ Associations: Not Supported 00:28:45.146 UUID List: Not Supported 00:28:45.146 Multi-Domain Subsystem: Not Supported 00:28:45.146 Fixed Capacity Management: Not Supported 00:28:45.146 Variable Capacity Management: Not Supported 00:28:45.146 Delete Endurance Group: Not Supported 00:28:45.146 Delete NVM Set: Not Supported 00:28:45.146 Extended LBA Formats Supported: Not Supported 00:28:45.146 Flexible Data Placement Supported: Not Supported 00:28:45.146 00:28:45.146 Controller Memory Buffer Support 00:28:45.146 ================================ 00:28:45.146 Supported: No 00:28:45.146 00:28:45.146 Persistent Memory Region Support 00:28:45.146 ================================ 00:28:45.146 Supported: No 00:28:45.146 00:28:45.146 Admin Command Set Attributes 00:28:45.146 ============================ 00:28:45.146 Security Send/Receive: Not Supported 00:28:45.146 Format NVM: Not Supported 00:28:45.146 Firmware Activate/Download: Not Supported 00:28:45.146 Namespace Management: Not Supported 00:28:45.146 Device Self-Test: Not Supported 00:28:45.146 Directives: Not Supported 00:28:45.146 NVMe-MI: Not Supported 00:28:45.146 Virtualization Management: Not Supported 00:28:45.146 Doorbell Buffer Config: Not Supported 00:28:45.146 Get LBA Status Capability: Not Supported 00:28:45.146 Command & Feature Lockdown Capability: Not Supported 00:28:45.146 Abort Command Limit: 1 00:28:45.146 Async Event Request Limit: 1 00:28:45.146 Number of Firmware Slots: N/A 00:28:45.146 Firmware Slot 1 Read-Only: N/A 00:28:45.146 Firmware Activation Without Reset: N/A 00:28:45.146 Multiple Update Detection Support: N/A 00:28:45.146 Firmware Update Granularity: No Information Provided 00:28:45.146 Per-Namespace SMART Log: No 00:28:45.146 Asymmetric Namespace Access Log Page: Not Supported 00:28:45.146 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:45.146 Command Effects Log Page: Not Supported 00:28:45.146 Get Log Page Extended Data: Supported 00:28:45.146 Telemetry Log Pages: Not Supported 00:28:45.146 Persistent Event Log Pages: Not Supported 00:28:45.146 Supported Log Pages Log Page: May Support 00:28:45.146 Commands Supported & Effects Log Page: Not Supported 00:28:45.146 Feature Identifiers & Effects Log Page:May Support 00:28:45.146 NVMe-MI Commands & Effects Log Page: May Support 00:28:45.146 Data Area 4 for Telemetry Log: Not Supported 00:28:45.146 Error Log Page Entries Supported: 1 00:28:45.146 Keep Alive: Not Supported 00:28:45.146 00:28:45.146 NVM Command Set Attributes 00:28:45.146 ========================== 00:28:45.146 Submission Queue Entry Size 00:28:45.146 Max: 1 00:28:45.146 Min: 1 00:28:45.146 Completion Queue Entry Size 00:28:45.146 Max: 1 00:28:45.146 Min: 1 00:28:45.146 Number of Namespaces: 0 00:28:45.146 Compare Command: Not Supported 00:28:45.146 Write Uncorrectable Command: Not Supported 00:28:45.146 Dataset Management Command: Not Supported 00:28:45.146 Write Zeroes Command: Not Supported 00:28:45.146 Set Features Save Field: Not Supported 00:28:45.146 Reservations: Not Supported 00:28:45.146 Timestamp: Not Supported 00:28:45.146 Copy: Not 
Supported 00:28:45.146 Volatile Write Cache: Not Present 00:28:45.146 Atomic Write Unit (Normal): 1 00:28:45.146 Atomic Write Unit (PFail): 1 00:28:45.146 Atomic Compare & Write Unit: 1 00:28:45.146 Fused Compare & Write: Not Supported 00:28:45.146 Scatter-Gather List 00:28:45.146 SGL Command Set: Supported 00:28:45.146 SGL Keyed: Not Supported 00:28:45.146 SGL Bit Bucket Descriptor: Not Supported 00:28:45.146 SGL Metadata Pointer: Not Supported 00:28:45.146 Oversized SGL: Not Supported 00:28:45.146 SGL Metadata Address: Not Supported 00:28:45.146 SGL Offset: Supported 00:28:45.146 Transport SGL Data Block: Not Supported 00:28:45.146 Replay Protected Memory Block: Not Supported 00:28:45.146 00:28:45.146 Firmware Slot Information 00:28:45.146 ========================= 00:28:45.146 Active slot: 0 00:28:45.146 00:28:45.146 00:28:45.146 Error Log 00:28:45.146 ========= 00:28:45.146 00:28:45.146 Active Namespaces 00:28:45.146 ================= 00:28:45.146 Discovery Log Page 00:28:45.146 ================== 00:28:45.146 Generation Counter: 2 00:28:45.146 Number of Records: 2 00:28:45.146 Record Format: 0 00:28:45.146 00:28:45.146 Discovery Log Entry 0 00:28:45.146 ---------------------- 00:28:45.146 Transport Type: 3 (TCP) 00:28:45.146 Address Family: 1 (IPv4) 00:28:45.146 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:45.146 Entry Flags: 00:28:45.146 Duplicate Returned Information: 0 00:28:45.146 Explicit Persistent Connection Support for Discovery: 0 00:28:45.146 Transport Requirements: 00:28:45.146 Secure Channel: Not Specified 00:28:45.146 Port ID: 1 (0x0001) 00:28:45.146 Controller ID: 65535 (0xffff) 00:28:45.146 Admin Max SQ Size: 32 00:28:45.146 Transport Service Identifier: 4420 00:28:45.146 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:45.146 Transport Address: 10.0.0.1 00:28:45.146 Discovery Log Entry 1 00:28:45.146 ---------------------- 00:28:45.146 Transport Type: 3 (TCP) 00:28:45.146 Address Family: 1 (IPv4) 00:28:45.146 Subsystem Type: 2 (NVM Subsystem) 00:28:45.146 Entry Flags: 00:28:45.146 Duplicate Returned Information: 0 00:28:45.146 Explicit Persistent Connection Support for Discovery: 0 00:28:45.146 Transport Requirements: 00:28:45.147 Secure Channel: Not Specified 00:28:45.147 Port ID: 1 (0x0001) 00:28:45.147 Controller ID: 65535 (0xffff) 00:28:45.147 Admin Max SQ Size: 32 00:28:45.147 Transport Service Identifier: 4420 00:28:45.147 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:45.147 Transport Address: 10.0.0.1 00:28:45.147 06:54:49 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:45.147 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.147 get_feature(0x01) failed 00:28:45.147 get_feature(0x02) failed 00:28:45.147 get_feature(0x04) failed 00:28:45.147 ===================================================== 00:28:45.147 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:45.147 ===================================================== 00:28:45.147 Controller Capabilities/Features 00:28:45.147 ================================ 00:28:45.147 Vendor ID: 0000 00:28:45.147 Subsystem Vendor ID: 0000 00:28:45.147 Serial Number: 858271fe60c8b50c509f 00:28:45.147 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:28:45.147 Firmware Version: 6.7.0-68 00:28:45.147 Recommended Arb Burst: 6 00:28:45.147 IEEE OUI Identifier: 00 00 00 
00:28:45.147 Multi-path I/O 00:28:45.147 May have multiple subsystem ports: Yes 00:28:45.147 May have multiple controllers: Yes 00:28:45.147 Associated with SR-IOV VF: No 00:28:45.147 Max Data Transfer Size: Unlimited 00:28:45.147 Max Number of Namespaces: 1024 00:28:45.147 Max Number of I/O Queues: 128 00:28:45.147 NVMe Specification Version (VS): 1.3 00:28:45.147 NVMe Specification Version (Identify): 1.3 00:28:45.147 Maximum Queue Entries: 1024 00:28:45.147 Contiguous Queues Required: No 00:28:45.147 Arbitration Mechanisms Supported 00:28:45.147 Weighted Round Robin: Not Supported 00:28:45.147 Vendor Specific: Not Supported 00:28:45.147 Reset Timeout: 7500 ms 00:28:45.147 Doorbell Stride: 4 bytes 00:28:45.147 NVM Subsystem Reset: Not Supported 00:28:45.147 Command Sets Supported 00:28:45.147 NVM Command Set: Supported 00:28:45.147 Boot Partition: Not Supported 00:28:45.147 Memory Page Size Minimum: 4096 bytes 00:28:45.147 Memory Page Size Maximum: 4096 bytes 00:28:45.147 Persistent Memory Region: Not Supported 00:28:45.147 Optional Asynchronous Events Supported 00:28:45.147 Namespace Attribute Notices: Supported 00:28:45.147 Firmware Activation Notices: Not Supported 00:28:45.147 ANA Change Notices: Supported 00:28:45.147 PLE Aggregate Log Change Notices: Not Supported 00:28:45.147 LBA Status Info Alert Notices: Not Supported 00:28:45.147 EGE Aggregate Log Change Notices: Not Supported 00:28:45.147 Normal NVM Subsystem Shutdown event: Not Supported 00:28:45.147 Zone Descriptor Change Notices: Not Supported 00:28:45.147 Discovery Log Change Notices: Not Supported 00:28:45.147 Controller Attributes 00:28:45.147 128-bit Host Identifier: Supported 00:28:45.147 Non-Operational Permissive Mode: Not Supported 00:28:45.147 NVM Sets: Not Supported 00:28:45.147 Read Recovery Levels: Not Supported 00:28:45.147 Endurance Groups: Not Supported 00:28:45.147 Predictable Latency Mode: Not Supported 00:28:45.147 Traffic Based Keep ALive: Supported 00:28:45.147 Namespace Granularity: Not Supported 00:28:45.147 SQ Associations: Not Supported 00:28:45.147 UUID List: Not Supported 00:28:45.147 Multi-Domain Subsystem: Not Supported 00:28:45.147 Fixed Capacity Management: Not Supported 00:28:45.147 Variable Capacity Management: Not Supported 00:28:45.147 Delete Endurance Group: Not Supported 00:28:45.147 Delete NVM Set: Not Supported 00:28:45.147 Extended LBA Formats Supported: Not Supported 00:28:45.147 Flexible Data Placement Supported: Not Supported 00:28:45.147 00:28:45.147 Controller Memory Buffer Support 00:28:45.147 ================================ 00:28:45.147 Supported: No 00:28:45.147 00:28:45.147 Persistent Memory Region Support 00:28:45.147 ================================ 00:28:45.147 Supported: No 00:28:45.147 00:28:45.147 Admin Command Set Attributes 00:28:45.147 ============================ 00:28:45.147 Security Send/Receive: Not Supported 00:28:45.147 Format NVM: Not Supported 00:28:45.147 Firmware Activate/Download: Not Supported 00:28:45.147 Namespace Management: Not Supported 00:28:45.147 Device Self-Test: Not Supported 00:28:45.147 Directives: Not Supported 00:28:45.147 NVMe-MI: Not Supported 00:28:45.147 Virtualization Management: Not Supported 00:28:45.147 Doorbell Buffer Config: Not Supported 00:28:45.147 Get LBA Status Capability: Not Supported 00:28:45.147 Command & Feature Lockdown Capability: Not Supported 00:28:45.147 Abort Command Limit: 4 00:28:45.147 Async Event Request Limit: 4 00:28:45.147 Number of Firmware Slots: N/A 00:28:45.147 Firmware Slot 1 Read-Only: N/A 00:28:45.147 
Firmware Activation Without Reset: N/A 00:28:45.147 Multiple Update Detection Support: N/A 00:28:45.147 Firmware Update Granularity: No Information Provided 00:28:45.147 Per-Namespace SMART Log: Yes 00:28:45.147 Asymmetric Namespace Access Log Page: Supported 00:28:45.147 ANA Transition Time : 10 sec 00:28:45.147 00:28:45.147 Asymmetric Namespace Access Capabilities 00:28:45.147 ANA Optimized State : Supported 00:28:45.147 ANA Non-Optimized State : Supported 00:28:45.147 ANA Inaccessible State : Supported 00:28:45.147 ANA Persistent Loss State : Supported 00:28:45.147 ANA Change State : Supported 00:28:45.147 ANAGRPID is not changed : No 00:28:45.147 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:45.147 00:28:45.147 ANA Group Identifier Maximum : 128 00:28:45.147 Number of ANA Group Identifiers : 128 00:28:45.147 Max Number of Allowed Namespaces : 1024 00:28:45.147 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:28:45.147 Command Effects Log Page: Supported 00:28:45.147 Get Log Page Extended Data: Supported 00:28:45.147 Telemetry Log Pages: Not Supported 00:28:45.147 Persistent Event Log Pages: Not Supported 00:28:45.147 Supported Log Pages Log Page: May Support 00:28:45.147 Commands Supported & Effects Log Page: Not Supported 00:28:45.147 Feature Identifiers & Effects Log Page:May Support 00:28:45.147 NVMe-MI Commands & Effects Log Page: May Support 00:28:45.147 Data Area 4 for Telemetry Log: Not Supported 00:28:45.147 Error Log Page Entries Supported: 128 00:28:45.147 Keep Alive: Supported 00:28:45.147 Keep Alive Granularity: 1000 ms 00:28:45.147 00:28:45.147 NVM Command Set Attributes 00:28:45.147 ========================== 00:28:45.147 Submission Queue Entry Size 00:28:45.147 Max: 64 00:28:45.147 Min: 64 00:28:45.147 Completion Queue Entry Size 00:28:45.147 Max: 16 00:28:45.147 Min: 16 00:28:45.147 Number of Namespaces: 1024 00:28:45.147 Compare Command: Not Supported 00:28:45.147 Write Uncorrectable Command: Not Supported 00:28:45.147 Dataset Management Command: Supported 00:28:45.147 Write Zeroes Command: Supported 00:28:45.147 Set Features Save Field: Not Supported 00:28:45.147 Reservations: Not Supported 00:28:45.147 Timestamp: Not Supported 00:28:45.147 Copy: Not Supported 00:28:45.147 Volatile Write Cache: Present 00:28:45.147 Atomic Write Unit (Normal): 1 00:28:45.147 Atomic Write Unit (PFail): 1 00:28:45.147 Atomic Compare & Write Unit: 1 00:28:45.147 Fused Compare & Write: Not Supported 00:28:45.147 Scatter-Gather List 00:28:45.147 SGL Command Set: Supported 00:28:45.147 SGL Keyed: Not Supported 00:28:45.147 SGL Bit Bucket Descriptor: Not Supported 00:28:45.147 SGL Metadata Pointer: Not Supported 00:28:45.147 Oversized SGL: Not Supported 00:28:45.147 SGL Metadata Address: Not Supported 00:28:45.147 SGL Offset: Supported 00:28:45.147 Transport SGL Data Block: Not Supported 00:28:45.147 Replay Protected Memory Block: Not Supported 00:28:45.147 00:28:45.147 Firmware Slot Information 00:28:45.147 ========================= 00:28:45.147 Active slot: 0 00:28:45.147 00:28:45.147 Asymmetric Namespace Access 00:28:45.147 =========================== 00:28:45.147 Change Count : 0 00:28:45.147 Number of ANA Group Descriptors : 1 00:28:45.147 ANA Group Descriptor : 0 00:28:45.147 ANA Group ID : 1 00:28:45.147 Number of NSID Values : 1 00:28:45.147 Change Count : 0 00:28:45.147 ANA State : 1 00:28:45.147 Namespace Identifier : 1 00:28:45.147 00:28:45.147 Commands Supported and Effects 00:28:45.148 ============================== 00:28:45.148 Admin Commands 00:28:45.148 -------------- 
00:28:45.148 Get Log Page (02h): Supported 00:28:45.148 Identify (06h): Supported 00:28:45.148 Abort (08h): Supported 00:28:45.148 Set Features (09h): Supported 00:28:45.148 Get Features (0Ah): Supported 00:28:45.148 Asynchronous Event Request (0Ch): Supported 00:28:45.148 Keep Alive (18h): Supported 00:28:45.148 I/O Commands 00:28:45.148 ------------ 00:28:45.148 Flush (00h): Supported 00:28:45.148 Write (01h): Supported LBA-Change 00:28:45.148 Read (02h): Supported 00:28:45.148 Write Zeroes (08h): Supported LBA-Change 00:28:45.148 Dataset Management (09h): Supported 00:28:45.148 00:28:45.148 Error Log 00:28:45.148 ========= 00:28:45.148 Entry: 0 00:28:45.148 Error Count: 0x3 00:28:45.148 Submission Queue Id: 0x0 00:28:45.148 Command Id: 0x5 00:28:45.148 Phase Bit: 0 00:28:45.148 Status Code: 0x2 00:28:45.148 Status Code Type: 0x0 00:28:45.148 Do Not Retry: 1 00:28:45.148 Error Location: 0x28 00:28:45.148 LBA: 0x0 00:28:45.148 Namespace: 0x0 00:28:45.148 Vendor Log Page: 0x0 00:28:45.148 ----------- 00:28:45.148 Entry: 1 00:28:45.148 Error Count: 0x2 00:28:45.148 Submission Queue Id: 0x0 00:28:45.148 Command Id: 0x5 00:28:45.148 Phase Bit: 0 00:28:45.148 Status Code: 0x2 00:28:45.148 Status Code Type: 0x0 00:28:45.148 Do Not Retry: 1 00:28:45.148 Error Location: 0x28 00:28:45.148 LBA: 0x0 00:28:45.148 Namespace: 0x0 00:28:45.148 Vendor Log Page: 0x0 00:28:45.148 ----------- 00:28:45.148 Entry: 2 00:28:45.148 Error Count: 0x1 00:28:45.148 Submission Queue Id: 0x0 00:28:45.148 Command Id: 0x4 00:28:45.148 Phase Bit: 0 00:28:45.148 Status Code: 0x2 00:28:45.148 Status Code Type: 0x0 00:28:45.148 Do Not Retry: 1 00:28:45.148 Error Location: 0x28 00:28:45.148 LBA: 0x0 00:28:45.148 Namespace: 0x0 00:28:45.148 Vendor Log Page: 0x0 00:28:45.148 00:28:45.148 Number of Queues 00:28:45.148 ================ 00:28:45.148 Number of I/O Submission Queues: 128 00:28:45.148 Number of I/O Completion Queues: 128 00:28:45.148 00:28:45.148 ZNS Specific Controller Data 00:28:45.148 ============================ 00:28:45.148 Zone Append Size Limit: 0 00:28:45.148 00:28:45.148 00:28:45.148 Active Namespaces 00:28:45.148 ================= 00:28:45.148 get_feature(0x05) failed 00:28:45.148 Namespace ID:1 00:28:45.148 Command Set Identifier: NVM (00h) 00:28:45.148 Deallocate: Supported 00:28:45.148 Deallocated/Unwritten Error: Not Supported 00:28:45.148 Deallocated Read Value: Unknown 00:28:45.148 Deallocate in Write Zeroes: Not Supported 00:28:45.148 Deallocated Guard Field: 0xFFFF 00:28:45.148 Flush: Supported 00:28:45.148 Reservation: Not Supported 00:28:45.148 Namespace Sharing Capabilities: Multiple Controllers 00:28:45.148 Size (in LBAs): 1953525168 (931GiB) 00:28:45.148 Capacity (in LBAs): 1953525168 (931GiB) 00:28:45.148 Utilization (in LBAs): 1953525168 (931GiB) 00:28:45.148 UUID: 849878da-37a2-424a-ba2c-82132502c1b2 00:28:45.148 Thin Provisioning: Not Supported 00:28:45.148 Per-NS Atomic Units: Yes 00:28:45.148 Atomic Boundary Size (Normal): 0 00:28:45.148 Atomic Boundary Size (PFail): 0 00:28:45.148 Atomic Boundary Offset: 0 00:28:45.148 NGUID/EUI64 Never Reused: No 00:28:45.148 ANA group ID: 1 00:28:45.148 Namespace Write Protected: No 00:28:45.148 Number of LBA Formats: 1 00:28:45.148 Current LBA Format: LBA Format #00 00:28:45.148 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:45.148 00:28:45.148 06:54:49 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:45.148 06:54:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:45.148 06:54:49 -- nvmf/common.sh@117 -- # sync 00:28:45.148 06:54:49 -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:45.148 06:54:49 -- nvmf/common.sh@120 -- # set +e 00:28:45.148 06:54:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:45.148 06:54:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:45.148 rmmod nvme_tcp 00:28:45.148 rmmod nvme_fabrics 00:28:45.148 06:54:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:45.148 06:54:49 -- nvmf/common.sh@124 -- # set -e 00:28:45.148 06:54:49 -- nvmf/common.sh@125 -- # return 0 00:28:45.148 06:54:49 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:28:45.148 06:54:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:28:45.148 06:54:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:45.148 06:54:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:45.148 06:54:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:45.148 06:54:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:45.148 06:54:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.148 06:54:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:45.148 06:54:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:47.681 06:54:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:47.681 06:54:51 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:28:47.681 06:54:51 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:47.681 06:54:51 -- nvmf/common.sh@675 -- # echo 0 00:28:47.681 06:54:51 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:47.681 06:54:51 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:47.681 06:54:51 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:47.681 06:54:51 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:47.681 06:54:51 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:28:47.681 06:54:51 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:28:47.681 06:54:51 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:48.247 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:48.247 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:48.247 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:48.247 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:48.247 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:48.247 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:48.247 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:48.247 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:48.247 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:48.247 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:48.247 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:48.247 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:48.247 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:48.247 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:48.247 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:48.247 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:49.624 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:28:49.624 00:28:49.624 real 0m9.056s 00:28:49.624 user 0m1.866s 00:28:49.624 sys 0m3.264s 00:28:49.624 06:54:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:49.624 06:54:53 -- common/autotest_common.sh@10 -- # set +x 00:28:49.624 ************************************ 00:28:49.624 END 
TEST nvmf_identify_kernel_target 00:28:49.624 ************************************ 00:28:49.624 06:54:54 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:49.624 06:54:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:49.624 06:54:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:49.624 06:54:54 -- common/autotest_common.sh@10 -- # set +x 00:28:49.624 ************************************ 00:28:49.624 START TEST nvmf_auth 00:28:49.624 ************************************ 00:28:49.624 06:54:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:49.624 * Looking for test storage... 00:28:49.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:49.624 06:54:54 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:49.624 06:54:54 -- nvmf/common.sh@7 -- # uname -s 00:28:49.624 06:54:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:49.624 06:54:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:49.624 06:54:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:49.624 06:54:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:49.624 06:54:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:49.624 06:54:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:49.624 06:54:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:49.624 06:54:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:49.624 06:54:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:49.624 06:54:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:49.624 06:54:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:49.624 06:54:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:49.624 06:54:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:49.624 06:54:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:49.624 06:54:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:49.624 06:54:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:49.624 06:54:54 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:49.624 06:54:54 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:49.624 06:54:54 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:49.624 06:54:54 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:49.624 06:54:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.624 06:54:54 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.624 06:54:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.624 06:54:54 -- paths/export.sh@5 -- # export PATH 00:28:49.624 06:54:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.624 06:54:54 -- nvmf/common.sh@47 -- # : 0 00:28:49.624 06:54:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:49.624 06:54:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:49.624 06:54:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:49.624 06:54:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:49.624 06:54:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:49.624 06:54:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:49.624 06:54:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:49.624 06:54:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:49.624 06:54:54 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:28:49.624 06:54:54 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:28:49.624 06:54:54 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:28:49.624 06:54:54 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:28:49.624 06:54:54 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:49.624 06:54:54 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:49.624 06:54:54 -- host/auth.sh@21 -- # keys=() 00:28:49.624 06:54:54 -- host/auth.sh@77 -- # nvmftestinit 00:28:49.624 06:54:54 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:49.624 06:54:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:49.624 06:54:54 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:49.624 06:54:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:49.624 06:54:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:49.625 06:54:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.625 06:54:54 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:49.625 06:54:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.625 06:54:54 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:28:49.625 06:54:54 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:28:49.625 06:54:54 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:49.625 06:54:54 -- common/autotest_common.sh@10 -- # set +x 00:28:51.523 06:54:55 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:51.523 06:54:55 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:51.523 06:54:55 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:51.523 06:54:55 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:51.523 06:54:55 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:51.523 06:54:55 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:51.523 06:54:55 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:51.523 06:54:55 -- nvmf/common.sh@295 -- # net_devs=() 00:28:51.523 06:54:55 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:51.523 06:54:55 -- nvmf/common.sh@296 -- # e810=() 00:28:51.523 06:54:55 -- nvmf/common.sh@296 -- # local -ga e810 00:28:51.523 06:54:55 -- nvmf/common.sh@297 -- # x722=() 00:28:51.523 06:54:55 -- nvmf/common.sh@297 -- # local -ga x722 00:28:51.523 06:54:55 -- nvmf/common.sh@298 -- # mlx=() 00:28:51.523 06:54:55 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:51.523 06:54:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:51.523 06:54:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:51.523 06:54:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:51.523 06:54:55 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:51.523 06:54:55 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:51.523 06:54:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:51.523 06:54:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:51.523 06:54:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:51.523 06:54:55 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:51.523 06:54:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:51.523 06:54:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:51.523 06:54:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:51.523 06:54:55 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:51.523 06:54:55 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:51.523 06:54:55 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:51.523 06:54:55 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:51.523 06:54:55 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:51.523 06:54:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:51.523 06:54:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:51.523 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:51.523 06:54:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:51.523 06:54:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:51.523 06:54:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.523 06:54:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.523 06:54:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:51.523 06:54:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:51.523 06:54:55 -- nvmf/common.sh@341 -- # echo 'Found 
0000:0a:00.1 (0x8086 - 0x159b)' 00:28:51.523 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:51.523 06:54:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:51.523 06:54:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:51.523 06:54:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.523 06:54:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.523 06:54:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:51.523 06:54:55 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:51.523 06:54:55 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:51.523 06:54:55 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:51.523 06:54:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:51.523 06:54:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.523 06:54:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:51.523 06:54:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.523 06:54:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:51.523 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:51.523 06:54:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.523 06:54:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:51.523 06:54:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.523 06:54:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:51.523 06:54:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.523 06:54:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:51.523 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:51.523 06:54:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.523 06:54:55 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:28:51.523 06:54:55 -- nvmf/common.sh@403 -- # is_hw=yes 00:28:51.523 06:54:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:28:51.523 06:54:55 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:28:51.523 06:54:55 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:28:51.523 06:54:55 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:51.523 06:54:55 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:51.523 06:54:55 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:51.523 06:54:55 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:51.523 06:54:55 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:51.523 06:54:55 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:51.523 06:54:55 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:51.524 06:54:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:51.524 06:54:55 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:51.524 06:54:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:51.524 06:54:55 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:51.524 06:54:55 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:51.524 06:54:55 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:51.524 06:54:56 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:51.524 06:54:56 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:51.524 06:54:56 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:51.524 06:54:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:51.524 06:54:56 -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:51.524 06:54:56 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:51.524 06:54:56 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:51.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:51.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:28:51.524 00:28:51.524 --- 10.0.0.2 ping statistics --- 00:28:51.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.524 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:28:51.524 06:54:56 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:51.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:51.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:28:51.524 00:28:51.524 --- 10.0.0.1 ping statistics --- 00:28:51.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.524 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:28:51.524 06:54:56 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:51.524 06:54:56 -- nvmf/common.sh@411 -- # return 0 00:28:51.524 06:54:56 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:28:51.524 06:54:56 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:51.524 06:54:56 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:51.524 06:54:56 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:51.524 06:54:56 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:51.524 06:54:56 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:51.524 06:54:56 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:51.782 06:54:56 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:28:51.782 06:54:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:51.782 06:54:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:51.782 06:54:56 -- common/autotest_common.sh@10 -- # set +x 00:28:51.782 06:54:56 -- nvmf/common.sh@470 -- # nvmfpid=104424 00:28:51.782 06:54:56 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:28:51.782 06:54:56 -- nvmf/common.sh@471 -- # waitforlisten 104424 00:28:51.782 06:54:56 -- common/autotest_common.sh@817 -- # '[' -z 104424 ']' 00:28:51.782 06:54:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:51.782 06:54:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:51.782 06:54:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
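The entries above trace nvmf_tcp_init and nvmfappstart: the cvl_0_0 port is moved into a private network namespace and addressed as 10.0.0.2, cvl_0_1 stays in the default namespace as 10.0.0.1, port 4420 is opened in iptables, connectivity is checked with ping in both directions, nvme-tcp is loaded, and the SPDK application is launched inside the namespace with nvme_auth tracing enabled. Condensed into standalone commands (interface names and addresses are specific to this run; the nvmf_tgt path is abbreviated), the setup is roughly:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # this port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # default-namespace side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
ping -c 1 10.0.0.2                                             # reach the namespaced address
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # and the reverse direction
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &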
00:28:51.782 06:54:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:51.782 06:54:56 -- common/autotest_common.sh@10 -- # set +x 00:28:52.040 06:54:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:52.040 06:54:56 -- common/autotest_common.sh@850 -- # return 0 00:28:52.040 06:54:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:52.040 06:54:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:52.040 06:54:56 -- common/autotest_common.sh@10 -- # set +x 00:28:52.040 06:54:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:52.040 06:54:56 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:52.040 06:54:56 -- host/auth.sh@81 -- # gen_key null 32 00:28:52.040 06:54:56 -- host/auth.sh@53 -- # local digest len file key 00:28:52.040 06:54:56 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:52.040 06:54:56 -- host/auth.sh@54 -- # local -A digests 00:28:52.040 06:54:56 -- host/auth.sh@56 -- # digest=null 00:28:52.040 06:54:56 -- host/auth.sh@56 -- # len=32 00:28:52.040 06:54:56 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:52.040 06:54:56 -- host/auth.sh@57 -- # key=5f8a94e8636f262c8d960e67647295ac 00:28:52.040 06:54:56 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:28:52.040 06:54:56 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.X1U 00:28:52.040 06:54:56 -- host/auth.sh@59 -- # format_dhchap_key 5f8a94e8636f262c8d960e67647295ac 0 00:28:52.040 06:54:56 -- nvmf/common.sh@708 -- # format_key DHHC-1 5f8a94e8636f262c8d960e67647295ac 0 00:28:52.040 06:54:56 -- nvmf/common.sh@691 -- # local prefix key digest 00:28:52.040 06:54:56 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:28:52.041 06:54:56 -- nvmf/common.sh@693 -- # key=5f8a94e8636f262c8d960e67647295ac 00:28:52.041 06:54:56 -- nvmf/common.sh@693 -- # digest=0 00:28:52.041 06:54:56 -- nvmf/common.sh@694 -- # python - 00:28:52.041 06:54:56 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.X1U 00:28:52.041 06:54:56 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.X1U 00:28:52.041 06:54:56 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.X1U 00:28:52.041 06:54:56 -- host/auth.sh@82 -- # gen_key null 48 00:28:52.041 06:54:56 -- host/auth.sh@53 -- # local digest len file key 00:28:52.041 06:54:56 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:52.041 06:54:56 -- host/auth.sh@54 -- # local -A digests 00:28:52.041 06:54:56 -- host/auth.sh@56 -- # digest=null 00:28:52.041 06:54:56 -- host/auth.sh@56 -- # len=48 00:28:52.041 06:54:56 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:52.041 06:54:56 -- host/auth.sh@57 -- # key=c7082760101b4701ade10a3bebcc68c12990fee5c5407509 00:28:52.041 06:54:56 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:28:52.041 06:54:56 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.paX 00:28:52.041 06:54:56 -- host/auth.sh@59 -- # format_dhchap_key c7082760101b4701ade10a3bebcc68c12990fee5c5407509 0 00:28:52.041 06:54:56 -- nvmf/common.sh@708 -- # format_key DHHC-1 c7082760101b4701ade10a3bebcc68c12990fee5c5407509 0 00:28:52.041 06:54:56 -- nvmf/common.sh@691 -- # local prefix key digest 00:28:52.041 06:54:56 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:28:52.041 06:54:56 -- nvmf/common.sh@693 -- # key=c7082760101b4701ade10a3bebcc68c12990fee5c5407509 00:28:52.041 06:54:56 -- nvmf/common.sh@693 -- # 
digest=0 00:28:52.041 06:54:56 -- nvmf/common.sh@694 -- # python - 00:28:52.041 06:54:56 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.paX 00:28:52.041 06:54:56 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.paX 00:28:52.041 06:54:56 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.paX 00:28:52.041 06:54:56 -- host/auth.sh@83 -- # gen_key sha256 32 00:28:52.041 06:54:56 -- host/auth.sh@53 -- # local digest len file key 00:28:52.041 06:54:56 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:52.041 06:54:56 -- host/auth.sh@54 -- # local -A digests 00:28:52.041 06:54:56 -- host/auth.sh@56 -- # digest=sha256 00:28:52.041 06:54:56 -- host/auth.sh@56 -- # len=32 00:28:52.041 06:54:56 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:52.041 06:54:56 -- host/auth.sh@57 -- # key=f41530c3cdf6fd96dc8f36f1b68f076f 00:28:52.041 06:54:56 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:28:52.041 06:54:56 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.T5h 00:28:52.041 06:54:56 -- host/auth.sh@59 -- # format_dhchap_key f41530c3cdf6fd96dc8f36f1b68f076f 1 00:28:52.041 06:54:56 -- nvmf/common.sh@708 -- # format_key DHHC-1 f41530c3cdf6fd96dc8f36f1b68f076f 1 00:28:52.041 06:54:56 -- nvmf/common.sh@691 -- # local prefix key digest 00:28:52.041 06:54:56 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:28:52.041 06:54:56 -- nvmf/common.sh@693 -- # key=f41530c3cdf6fd96dc8f36f1b68f076f 00:28:52.041 06:54:56 -- nvmf/common.sh@693 -- # digest=1 00:28:52.041 06:54:56 -- nvmf/common.sh@694 -- # python - 00:28:52.041 06:54:56 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.T5h 00:28:52.041 06:54:56 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.T5h 00:28:52.041 06:54:56 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.T5h 00:28:52.041 06:54:56 -- host/auth.sh@84 -- # gen_key sha384 48 00:28:52.041 06:54:56 -- host/auth.sh@53 -- # local digest len file key 00:28:52.041 06:54:56 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:52.041 06:54:56 -- host/auth.sh@54 -- # local -A digests 00:28:52.041 06:54:56 -- host/auth.sh@56 -- # digest=sha384 00:28:52.041 06:54:56 -- host/auth.sh@56 -- # len=48 00:28:52.041 06:54:56 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:52.041 06:54:56 -- host/auth.sh@57 -- # key=dcc0fa68701ba4b7f1fff8c622d3c5e8dc3bb0bc7f97973a 00:28:52.041 06:54:56 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:28:52.041 06:54:56 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.r1Y 00:28:52.041 06:54:56 -- host/auth.sh@59 -- # format_dhchap_key dcc0fa68701ba4b7f1fff8c622d3c5e8dc3bb0bc7f97973a 2 00:28:52.041 06:54:56 -- nvmf/common.sh@708 -- # format_key DHHC-1 dcc0fa68701ba4b7f1fff8c622d3c5e8dc3bb0bc7f97973a 2 00:28:52.041 06:54:56 -- nvmf/common.sh@691 -- # local prefix key digest 00:28:52.041 06:54:56 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:28:52.041 06:54:56 -- nvmf/common.sh@693 -- # key=dcc0fa68701ba4b7f1fff8c622d3c5e8dc3bb0bc7f97973a 00:28:52.041 06:54:56 -- nvmf/common.sh@693 -- # digest=2 00:28:52.041 06:54:56 -- nvmf/common.sh@694 -- # python - 00:28:52.041 06:54:56 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.r1Y 00:28:52.041 06:54:56 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.r1Y 00:28:52.041 06:54:56 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.r1Y 00:28:52.041 06:54:56 -- host/auth.sh@85 -- # gen_key sha512 64 00:28:52.041 06:54:56 -- host/auth.sh@53 -- # local digest len file key 00:28:52.041 06:54:56 -- 
host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:52.041 06:54:56 -- host/auth.sh@54 -- # local -A digests 00:28:52.041 06:54:56 -- host/auth.sh@56 -- # digest=sha512 00:28:52.041 06:54:56 -- host/auth.sh@56 -- # len=64 00:28:52.041 06:54:56 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:52.041 06:54:56 -- host/auth.sh@57 -- # key=232fd257e364bd34bd368e66fd474019de654643d43a39b23cff9ca6b0917abc 00:28:52.041 06:54:56 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:28:52.308 06:54:56 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.5rZ 00:28:52.308 06:54:56 -- host/auth.sh@59 -- # format_dhchap_key 232fd257e364bd34bd368e66fd474019de654643d43a39b23cff9ca6b0917abc 3 00:28:52.308 06:54:56 -- nvmf/common.sh@708 -- # format_key DHHC-1 232fd257e364bd34bd368e66fd474019de654643d43a39b23cff9ca6b0917abc 3 00:28:52.308 06:54:56 -- nvmf/common.sh@691 -- # local prefix key digest 00:28:52.308 06:54:56 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:28:52.308 06:54:56 -- nvmf/common.sh@693 -- # key=232fd257e364bd34bd368e66fd474019de654643d43a39b23cff9ca6b0917abc 00:28:52.308 06:54:56 -- nvmf/common.sh@693 -- # digest=3 00:28:52.308 06:54:56 -- nvmf/common.sh@694 -- # python - 00:28:52.308 06:54:56 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.5rZ 00:28:52.308 06:54:56 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.5rZ 00:28:52.308 06:54:56 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.5rZ 00:28:52.308 06:54:56 -- host/auth.sh@87 -- # waitforlisten 104424 00:28:52.308 06:54:56 -- common/autotest_common.sh@817 -- # '[' -z 104424 ']' 00:28:52.308 06:54:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:52.308 06:54:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:52.308 06:54:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:52.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
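The gen_key calls traced here draw random bytes with xxd and wrap them into the DHHC-1 secret representation used for NVMe in-band authentication: base64 of the secret with a CRC-32 appended, prefixed by a digest id (0 = null, 1 = sha256, 2 = sha384, 3 = sha512, matching the digests array above). The body of format_dhchap_key is not visible in this trace, so the helper below is an illustrative stand-in; in particular the little-endian CRC byte order is an assumption about the usual encoding of these secrets, not something shown in this log.

len=32; digest=0                                  # e.g. 'gen_key null 32' above
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # 32 hex characters for len=32
python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()
crc = zlib.crc32(secret).to_bytes(4, "little")    # assumed byte order, see note above
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
EOF

The resulting DHHC-1 strings are written to mktemp'd /tmp/spdk.key-* files with mode 0600 and later registered in the SPDK application's keyring through the keyring_file_add_key RPC traced below.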
00:28:52.308 06:54:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:52.308 06:54:56 -- common/autotest_common.sh@10 -- # set +x 00:28:52.602 06:54:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:52.602 06:54:56 -- common/autotest_common.sh@850 -- # return 0 00:28:52.602 06:54:56 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:28:52.602 06:54:56 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.X1U 00:28:52.602 06:54:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:52.602 06:54:56 -- common/autotest_common.sh@10 -- # set +x 00:28:52.602 06:54:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:52.602 06:54:56 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:28:52.602 06:54:56 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.paX 00:28:52.602 06:54:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:52.602 06:54:56 -- common/autotest_common.sh@10 -- # set +x 00:28:52.602 06:54:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:52.602 06:54:56 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:28:52.602 06:54:56 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.T5h 00:28:52.602 06:54:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:52.602 06:54:56 -- common/autotest_common.sh@10 -- # set +x 00:28:52.602 06:54:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:52.602 06:54:56 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:28:52.602 06:54:56 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.r1Y 00:28:52.602 06:54:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:52.602 06:54:56 -- common/autotest_common.sh@10 -- # set +x 00:28:52.602 06:54:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:52.602 06:54:56 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:28:52.602 06:54:56 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.5rZ 00:28:52.602 06:54:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:52.602 06:54:56 -- common/autotest_common.sh@10 -- # set +x 00:28:52.602 06:54:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:52.602 06:54:56 -- host/auth.sh@92 -- # nvmet_auth_init 00:28:52.602 06:54:56 -- host/auth.sh@35 -- # get_main_ns_ip 00:28:52.602 06:54:56 -- nvmf/common.sh@717 -- # local ip 00:28:52.602 06:54:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:52.602 06:54:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:52.602 06:54:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.602 06:54:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.602 06:54:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:52.602 06:54:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.602 06:54:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:52.602 06:54:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:52.602 06:54:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:52.602 06:54:56 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:52.602 06:54:56 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:52.602 06:54:56 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:28:52.602 06:54:56 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:52.602 06:54:56 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:52.602 06:54:56 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:52.602 06:54:56 -- nvmf/common.sh@628 -- # local block nvme 00:28:52.602 06:54:56 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:28:52.602 06:54:56 -- nvmf/common.sh@631 -- # modprobe nvmet 00:28:52.602 06:54:56 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:52.602 06:54:56 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:53.536 Waiting for block devices as requested 00:28:53.794 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:28:53.794 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:53.794 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:53.794 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:54.052 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:54.052 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:54.052 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:54.052 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:54.310 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:54.310 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:54.310 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:54.310 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:54.568 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:54.568 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:54.568 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:54.826 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:54.826 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:55.392 06:54:59 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:28:55.392 06:54:59 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:55.392 06:54:59 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:28:55.392 06:54:59 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:28:55.392 06:54:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:55.392 06:54:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:55.392 06:54:59 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:28:55.392 06:54:59 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:55.392 06:54:59 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:55.392 No valid GPT data, bailing 00:28:55.392 06:54:59 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:55.392 06:54:59 -- scripts/common.sh@391 -- # pt= 00:28:55.392 06:54:59 -- scripts/common.sh@392 -- # return 1 00:28:55.392 06:54:59 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:28:55.392 06:54:59 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:28:55.392 06:54:59 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:55.392 06:54:59 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:55.392 06:54:59 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:55.392 06:54:59 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:28:55.392 06:54:59 -- nvmf/common.sh@656 -- # echo 1 00:28:55.392 06:54:59 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:28:55.392 06:54:59 -- nvmf/common.sh@658 -- # echo 1 00:28:55.392 06:54:59 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:28:55.392 06:54:59 -- nvmf/common.sh@661 -- # echo tcp 00:28:55.392 06:54:59 -- 
nvmf/common.sh@662 -- # echo 4420 00:28:55.392 06:54:59 -- nvmf/common.sh@663 -- # echo ipv4 00:28:55.392 06:54:59 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:55.392 06:54:59 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:28:55.392 00:28:55.392 Discovery Log Number of Records 2, Generation counter 2 00:28:55.392 =====Discovery Log Entry 0====== 00:28:55.392 trtype: tcp 00:28:55.392 adrfam: ipv4 00:28:55.392 subtype: current discovery subsystem 00:28:55.392 treq: not specified, sq flow control disable supported 00:28:55.392 portid: 1 00:28:55.392 trsvcid: 4420 00:28:55.392 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:55.392 traddr: 10.0.0.1 00:28:55.392 eflags: none 00:28:55.392 sectype: none 00:28:55.392 =====Discovery Log Entry 1====== 00:28:55.392 trtype: tcp 00:28:55.392 adrfam: ipv4 00:28:55.392 subtype: nvme subsystem 00:28:55.392 treq: not specified, sq flow control disable supported 00:28:55.392 portid: 1 00:28:55.392 trsvcid: 4420 00:28:55.392 subnqn: nqn.2024-02.io.spdk:cnode0 00:28:55.392 traddr: 10.0.0.1 00:28:55.392 eflags: none 00:28:55.392 sectype: none 00:28:55.392 06:54:59 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:55.392 06:54:59 -- host/auth.sh@37 -- # echo 0 00:28:55.392 06:54:59 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:55.392 06:54:59 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:55.392 06:54:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:55.392 06:54:59 -- host/auth.sh@44 -- # digest=sha256 00:28:55.392 06:54:59 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:55.392 06:54:59 -- host/auth.sh@44 -- # keyid=1 00:28:55.392 06:54:59 -- host/auth.sh@45 -- # key=DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:28:55.392 06:54:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:55.392 06:54:59 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:55.392 06:54:59 -- host/auth.sh@49 -- # echo DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:28:55.392 06:54:59 -- host/auth.sh@100 -- # IFS=, 00:28:55.392 06:54:59 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:28:55.392 06:54:59 -- host/auth.sh@100 -- # IFS=, 00:28:55.392 06:54:59 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:55.392 06:54:59 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:55.392 06:54:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:55.392 06:54:59 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:28:55.392 06:54:59 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:55.392 06:54:59 -- host/auth.sh@68 -- # keyid=1 00:28:55.392 06:54:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:55.392 06:54:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.392 06:54:59 -- common/autotest_common.sh@10 -- # set +x 00:28:55.392 06:54:59 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.392 06:54:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:55.392 06:54:59 -- nvmf/common.sh@717 -- # local ip 00:28:55.392 06:54:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:55.392 06:54:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:55.392 06:54:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.392 06:54:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.392 06:54:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:55.392 06:54:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.392 06:54:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:55.392 06:54:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:55.392 06:54:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:55.392 06:54:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:55.392 06:54:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.392 06:54:59 -- common/autotest_common.sh@10 -- # set +x 00:28:55.392 nvme0n1 00:28:55.392 06:54:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.392 06:54:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.392 06:54:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.392 06:54:59 -- common/autotest_common.sh@10 -- # set +x 00:28:55.392 06:54:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:55.392 06:54:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.392 06:54:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.392 06:54:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:55.392 06:54:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.392 06:54:59 -- common/autotest_common.sh@10 -- # set +x 00:28:55.650 06:54:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.650 06:54:59 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:28:55.650 06:55:00 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:55.650 06:55:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:55.650 06:55:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:55.650 06:55:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:55.650 06:55:00 -- host/auth.sh@44 -- # digest=sha256 00:28:55.650 06:55:00 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:55.650 06:55:00 -- host/auth.sh@44 -- # keyid=0 00:28:55.650 06:55:00 -- host/auth.sh@45 -- # key=DHHC-1:00:NWY4YTk0ZTg2MzZmMjYyYzhkOTYwZTY3NjQ3Mjk1YWO8+KB+: 00:28:55.650 06:55:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:55.650 06:55:00 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:55.650 06:55:00 -- host/auth.sh@49 -- # echo DHHC-1:00:NWY4YTk0ZTg2MzZmMjYyYzhkOTYwZTY3NjQ3Mjk1YWO8+KB+: 00:28:55.650 06:55:00 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:28:55.650 06:55:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:55.650 06:55:00 -- host/auth.sh@68 -- # digest=sha256 00:28:55.650 06:55:00 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:55.650 06:55:00 -- host/auth.sh@68 -- # keyid=0 00:28:55.650 06:55:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:55.650 06:55:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.650 06:55:00 -- common/autotest_common.sh@10 -- # set +x 00:28:55.650 06:55:00 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.650 06:55:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:55.650 06:55:00 -- nvmf/common.sh@717 -- # local ip 00:28:55.650 06:55:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:55.650 06:55:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:55.650 06:55:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.650 06:55:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.650 06:55:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:55.650 06:55:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.650 06:55:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:55.650 06:55:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:55.650 06:55:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:55.650 06:55:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:55.651 06:55:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.651 06:55:00 -- common/autotest_common.sh@10 -- # set +x 00:28:55.651 nvme0n1 00:28:55.651 06:55:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.651 06:55:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.651 06:55:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:55.651 06:55:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.651 06:55:00 -- common/autotest_common.sh@10 -- # set +x 00:28:55.651 06:55:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.651 06:55:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.651 06:55:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:55.651 06:55:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.651 06:55:00 -- common/autotest_common.sh@10 -- # set +x 00:28:55.651 06:55:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.651 06:55:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:55.651 06:55:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:55.651 06:55:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:55.651 06:55:00 -- host/auth.sh@44 -- # digest=sha256 00:28:55.651 06:55:00 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:55.651 06:55:00 -- host/auth.sh@44 -- # keyid=1 00:28:55.651 06:55:00 -- host/auth.sh@45 -- # key=DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:28:55.651 06:55:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:55.651 06:55:00 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:55.651 06:55:00 -- host/auth.sh@49 -- # echo DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:28:55.651 06:55:00 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:28:55.651 06:55:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:55.651 06:55:00 -- host/auth.sh@68 -- # digest=sha256 00:28:55.651 06:55:00 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:55.651 06:55:00 -- host/auth.sh@68 -- # keyid=1 00:28:55.651 06:55:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:55.651 06:55:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.651 06:55:00 -- common/autotest_common.sh@10 -- # set +x 00:28:55.651 06:55:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.651 06:55:00 -- host/auth.sh@70 -- # get_main_ns_ip 
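For reference, the configure_kernel_target, nvmet_auth_init and nvmet_auth_set_key steps traced a few entries back assemble a kernel NVMe/TCP target through configfs and restrict it to nqn.2024-02.io.spdk:host0. The echo redirection targets are hidden by xtrace, so the attribute names below come from the standard nvmet configfs layout rather than from this log; the directories, NQNs, addresses and the /dev/nvme0n1 backing device do match the run above. A rough sketch:

cd /sys/kernel/config/nvmet
mkdir -p subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 ports/1 hosts/nqn.2024-02.io.spdk:host0
echo 0 > subsystems/nqn.2024-02.io.spdk:cnode0/attr_allow_any_host     # assumed attribute name
ln -s "$PWD"/hosts/nqn.2024-02.io.spdk:host0 subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/
echo /dev/nvme0n1 > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/device_path
echo 1 > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable
echo tcp > ports/1/addr_trtype
echo ipv4 > ports/1/addr_adrfam
echo 10.0.0.1 > ports/1/addr_traddr
echo 4420 > ports/1/addr_trsvcid
ln -s "$PWD"/subsystems/nqn.2024-02.io.spdk:cnode0 ports/1/subsystems/
# per-iteration DH-HMAC-CHAP parameters (nvmet_auth_set_key), attribute names assumed:
echo 'hmac(sha256)' > hosts/nqn.2024-02.io.spdk:host0/dhchap_hash
echo ffdhe2048 > hosts/nqn.2024-02.io.spdk:host0/dhchap_dhgroup
echo 'DHHC-1:00:...:' > hosts/nqn.2024-02.io.spdk:host0/dhchap_key

The nvme discover output above confirms the port serves both the discovery subsystem and nqn.2024-02.io.spdk:cnode0 on 10.0.0.1:4420.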
00:28:55.651 06:55:00 -- nvmf/common.sh@717 -- # local ip 00:28:55.651 06:55:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:55.651 06:55:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:55.651 06:55:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.651 06:55:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.651 06:55:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:55.651 06:55:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.651 06:55:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:55.651 06:55:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:55.651 06:55:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:55.651 06:55:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:55.651 06:55:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.651 06:55:00 -- common/autotest_common.sh@10 -- # set +x 00:28:55.909 nvme0n1 00:28:55.909 06:55:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.909 06:55:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.909 06:55:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.909 06:55:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:55.909 06:55:00 -- common/autotest_common.sh@10 -- # set +x 00:28:55.909 06:55:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.909 06:55:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.909 06:55:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:55.909 06:55:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.909 06:55:00 -- common/autotest_common.sh@10 -- # set +x 00:28:55.909 06:55:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.909 06:55:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:55.909 06:55:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:55.909 06:55:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:55.909 06:55:00 -- host/auth.sh@44 -- # digest=sha256 00:28:55.909 06:55:00 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:55.909 06:55:00 -- host/auth.sh@44 -- # keyid=2 00:28:55.909 06:55:00 -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQxNTMwYzNjZGY2ZmQ5NmRjOGYzNmYxYjY4ZjA3NmYINuWZ: 00:28:55.909 06:55:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:55.909 06:55:00 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:55.909 06:55:00 -- host/auth.sh@49 -- # echo DHHC-1:01:ZjQxNTMwYzNjZGY2ZmQ5NmRjOGYzNmYxYjY4ZjA3NmYINuWZ: 00:28:55.909 06:55:00 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:28:55.909 06:55:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:55.909 06:55:00 -- host/auth.sh@68 -- # digest=sha256 00:28:55.909 06:55:00 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:55.909 06:55:00 -- host/auth.sh@68 -- # keyid=2 00:28:55.909 06:55:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:55.909 06:55:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.909 06:55:00 -- common/autotest_common.sh@10 -- # set +x 00:28:55.909 06:55:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.909 06:55:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:55.909 06:55:00 -- nvmf/common.sh@717 -- # local ip 00:28:55.909 06:55:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:55.909 06:55:00 -- nvmf/common.sh@718 
-- # local -A ip_candidates 00:28:55.909 06:55:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.909 06:55:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.909 06:55:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:55.909 06:55:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.909 06:55:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:55.909 06:55:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:55.909 06:55:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:55.909 06:55:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:55.909 06:55:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.909 06:55:00 -- common/autotest_common.sh@10 -- # set +x 00:28:55.909 nvme0n1 00:28:55.909 06:55:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.909 06:55:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.909 06:55:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.909 06:55:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:55.909 06:55:00 -- common/autotest_common.sh@10 -- # set +x 00:28:56.168 06:55:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.168 06:55:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.168 06:55:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.168 06:55:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.168 06:55:00 -- common/autotest_common.sh@10 -- # set +x 00:28:56.168 06:55:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.168 06:55:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:56.168 06:55:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:56.168 06:55:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:56.168 06:55:00 -- host/auth.sh@44 -- # digest=sha256 00:28:56.168 06:55:00 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:56.168 06:55:00 -- host/auth.sh@44 -- # keyid=3 00:28:56.168 06:55:00 -- host/auth.sh@45 -- # key=DHHC-1:02:ZGNjMGZhNjg3MDFiYTRiN2YxZmZmOGM2MjJkM2M1ZThkYzNiYjBiYzdmOTc5NzNh3+M1UQ==: 00:28:56.168 06:55:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:56.168 06:55:00 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:56.168 06:55:00 -- host/auth.sh@49 -- # echo DHHC-1:02:ZGNjMGZhNjg3MDFiYTRiN2YxZmZmOGM2MjJkM2M1ZThkYzNiYjBiYzdmOTc5NzNh3+M1UQ==: 00:28:56.168 06:55:00 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:28:56.168 06:55:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:56.168 06:55:00 -- host/auth.sh@68 -- # digest=sha256 00:28:56.168 06:55:00 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:56.168 06:55:00 -- host/auth.sh@68 -- # keyid=3 00:28:56.168 06:55:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:56.168 06:55:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.168 06:55:00 -- common/autotest_common.sh@10 -- # set +x 00:28:56.168 06:55:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.168 06:55:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:56.168 06:55:00 -- nvmf/common.sh@717 -- # local ip 00:28:56.168 06:55:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:56.168 06:55:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:56.168 06:55:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
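Each connect_authenticate iteration in the remainder of this log drives the SPDK application (acting as the NVMe/TCP host here) through the same short RPC sequence; only the digest, DH group and key id change per loop pass. Using the sha256/ffdhe2048/key1 pass as the example, with rpc_cmd being the test suite's wrapper around scripts/rpc.py:

# keys key0..key4 were registered once earlier via 'rpc_cmd keyring_file_add_key ...'
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0 once DH-HMAC-CHAP succeeds
rpc_cmd bdev_nvme_detach_controller nvme0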
00:28:56.168 06:55:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.168 06:55:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:56.168 06:55:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.168 06:55:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:56.168 06:55:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:56.168 06:55:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:56.168 06:55:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:56.168 06:55:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.168 06:55:00 -- common/autotest_common.sh@10 -- # set +x 00:28:56.168 nvme0n1 00:28:56.168 06:55:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.168 06:55:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.168 06:55:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.168 06:55:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:56.168 06:55:00 -- common/autotest_common.sh@10 -- # set +x 00:28:56.168 06:55:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.168 06:55:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.168 06:55:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.168 06:55:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.168 06:55:00 -- common/autotest_common.sh@10 -- # set +x 00:28:56.168 06:55:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.168 06:55:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:56.168 06:55:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:56.168 06:55:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:56.168 06:55:00 -- host/auth.sh@44 -- # digest=sha256 00:28:56.168 06:55:00 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:56.168 06:55:00 -- host/auth.sh@44 -- # keyid=4 00:28:56.168 06:55:00 -- host/auth.sh@45 -- # key=DHHC-1:03:MjMyZmQyNTdlMzY0YmQzNGJkMzY4ZTY2ZmQ0NzQwMTlkZTY1NDY0M2Q0M2EzOWIyM2NmZjljYTZiMDkxN2FiYz3zi2k=: 00:28:56.168 06:55:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:56.168 06:55:00 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:56.168 06:55:00 -- host/auth.sh@49 -- # echo DHHC-1:03:MjMyZmQyNTdlMzY0YmQzNGJkMzY4ZTY2ZmQ0NzQwMTlkZTY1NDY0M2Q0M2EzOWIyM2NmZjljYTZiMDkxN2FiYz3zi2k=: 00:28:56.168 06:55:00 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:28:56.168 06:55:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:56.168 06:55:00 -- host/auth.sh@68 -- # digest=sha256 00:28:56.168 06:55:00 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:56.168 06:55:00 -- host/auth.sh@68 -- # keyid=4 00:28:56.168 06:55:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:56.168 06:55:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.168 06:55:00 -- common/autotest_common.sh@10 -- # set +x 00:28:56.168 06:55:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.168 06:55:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:56.168 06:55:00 -- nvmf/common.sh@717 -- # local ip 00:28:56.168 06:55:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:56.168 06:55:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:56.168 06:55:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.168 06:55:00 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.168 06:55:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:56.168 06:55:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.168 06:55:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:56.168 06:55:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:56.168 06:55:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:56.168 06:55:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:56.168 06:55:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.168 06:55:00 -- common/autotest_common.sh@10 -- # set +x 00:28:56.425 nvme0n1 00:28:56.425 06:55:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.425 06:55:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.425 06:55:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:56.425 06:55:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.425 06:55:00 -- common/autotest_common.sh@10 -- # set +x 00:28:56.425 06:55:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.426 06:55:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.426 06:55:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.426 06:55:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.426 06:55:00 -- common/autotest_common.sh@10 -- # set +x 00:28:56.426 06:55:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.426 06:55:00 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:56.426 06:55:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:56.426 06:55:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:56.426 06:55:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:56.426 06:55:00 -- host/auth.sh@44 -- # digest=sha256 00:28:56.426 06:55:00 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:56.426 06:55:00 -- host/auth.sh@44 -- # keyid=0 00:28:56.426 06:55:00 -- host/auth.sh@45 -- # key=DHHC-1:00:NWY4YTk0ZTg2MzZmMjYyYzhkOTYwZTY3NjQ3Mjk1YWO8+KB+: 00:28:56.426 06:55:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:56.426 06:55:00 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:56.426 06:55:00 -- host/auth.sh@49 -- # echo DHHC-1:00:NWY4YTk0ZTg2MzZmMjYyYzhkOTYwZTY3NjQ3Mjk1YWO8+KB+: 00:28:56.426 06:55:00 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:28:56.426 06:55:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:56.426 06:55:00 -- host/auth.sh@68 -- # digest=sha256 00:28:56.426 06:55:00 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:56.426 06:55:00 -- host/auth.sh@68 -- # keyid=0 00:28:56.426 06:55:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:56.426 06:55:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.426 06:55:00 -- common/autotest_common.sh@10 -- # set +x 00:28:56.426 06:55:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.426 06:55:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:56.426 06:55:00 -- nvmf/common.sh@717 -- # local ip 00:28:56.426 06:55:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:56.426 06:55:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:56.426 06:55:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.426 06:55:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.426 06:55:00 -- nvmf/common.sh@723 -- # 
[[ -z tcp ]] 00:28:56.426 06:55:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.426 06:55:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:56.426 06:55:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:56.426 06:55:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:56.426 06:55:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:56.426 06:55:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.426 06:55:00 -- common/autotest_common.sh@10 -- # set +x 00:28:56.684 nvme0n1 00:28:56.684 06:55:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.684 06:55:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.684 06:55:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.684 06:55:01 -- common/autotest_common.sh@10 -- # set +x 00:28:56.684 06:55:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:56.684 06:55:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.684 06:55:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.684 06:55:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.684 06:55:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.684 06:55:01 -- common/autotest_common.sh@10 -- # set +x 00:28:56.684 06:55:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.684 06:55:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:56.684 06:55:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:56.684 06:55:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:56.684 06:55:01 -- host/auth.sh@44 -- # digest=sha256 00:28:56.684 06:55:01 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:56.684 06:55:01 -- host/auth.sh@44 -- # keyid=1 00:28:56.684 06:55:01 -- host/auth.sh@45 -- # key=DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:28:56.684 06:55:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:56.684 06:55:01 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:56.684 06:55:01 -- host/auth.sh@49 -- # echo DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:28:56.684 06:55:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:28:56.684 06:55:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:56.684 06:55:01 -- host/auth.sh@68 -- # digest=sha256 00:28:56.684 06:55:01 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:56.684 06:55:01 -- host/auth.sh@68 -- # keyid=1 00:28:56.684 06:55:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:56.684 06:55:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.684 06:55:01 -- common/autotest_common.sh@10 -- # set +x 00:28:56.684 06:55:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.684 06:55:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:56.684 06:55:01 -- nvmf/common.sh@717 -- # local ip 00:28:56.684 06:55:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:56.684 06:55:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:56.684 06:55:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.684 06:55:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.684 06:55:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:56.684 06:55:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.684 06:55:01 -- 
nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:56.684 06:55:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:56.684 06:55:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:56.684 06:55:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:56.684 06:55:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.684 06:55:01 -- common/autotest_common.sh@10 -- # set +x 00:28:56.942 nvme0n1 00:28:56.942 06:55:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.942 06:55:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.942 06:55:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.942 06:55:01 -- common/autotest_common.sh@10 -- # set +x 00:28:56.942 06:55:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:56.942 06:55:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.942 06:55:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.942 06:55:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.942 06:55:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.942 06:55:01 -- common/autotest_common.sh@10 -- # set +x 00:28:56.942 06:55:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.942 06:55:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:56.942 06:55:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:56.942 06:55:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:56.942 06:55:01 -- host/auth.sh@44 -- # digest=sha256 00:28:56.942 06:55:01 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:56.942 06:55:01 -- host/auth.sh@44 -- # keyid=2 00:28:56.942 06:55:01 -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQxNTMwYzNjZGY2ZmQ5NmRjOGYzNmYxYjY4ZjA3NmYINuWZ: 00:28:56.942 06:55:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:56.942 06:55:01 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:56.942 06:55:01 -- host/auth.sh@49 -- # echo DHHC-1:01:ZjQxNTMwYzNjZGY2ZmQ5NmRjOGYzNmYxYjY4ZjA3NmYINuWZ: 00:28:56.942 06:55:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:28:56.942 06:55:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:56.942 06:55:01 -- host/auth.sh@68 -- # digest=sha256 00:28:56.942 06:55:01 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:56.942 06:55:01 -- host/auth.sh@68 -- # keyid=2 00:28:56.942 06:55:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:56.942 06:55:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.942 06:55:01 -- common/autotest_common.sh@10 -- # set +x 00:28:56.942 06:55:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.942 06:55:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:56.942 06:55:01 -- nvmf/common.sh@717 -- # local ip 00:28:56.942 06:55:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:56.942 06:55:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:56.942 06:55:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.942 06:55:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.942 06:55:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:56.942 06:55:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.942 06:55:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:56.942 06:55:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:56.942 06:55:01 -- nvmf/common.sh@731 -- # echo 
10.0.0.1 00:28:56.942 06:55:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:56.942 06:55:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.942 06:55:01 -- common/autotest_common.sh@10 -- # set +x 00:28:57.200 nvme0n1 00:28:57.200 06:55:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:57.200 06:55:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.200 06:55:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:57.200 06:55:01 -- common/autotest_common.sh@10 -- # set +x 00:28:57.200 06:55:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:57.200 06:55:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:57.200 06:55:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.200 06:55:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:57.200 06:55:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:57.200 06:55:01 -- common/autotest_common.sh@10 -- # set +x 00:28:57.200 06:55:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:57.200 06:55:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:57.200 06:55:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:57.200 06:55:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:57.200 06:55:01 -- host/auth.sh@44 -- # digest=sha256 00:28:57.200 06:55:01 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:57.200 06:55:01 -- host/auth.sh@44 -- # keyid=3 00:28:57.200 06:55:01 -- host/auth.sh@45 -- # key=DHHC-1:02:ZGNjMGZhNjg3MDFiYTRiN2YxZmZmOGM2MjJkM2M1ZThkYzNiYjBiYzdmOTc5NzNh3+M1UQ==: 00:28:57.200 06:55:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:57.200 06:55:01 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:57.200 06:55:01 -- host/auth.sh@49 -- # echo DHHC-1:02:ZGNjMGZhNjg3MDFiYTRiN2YxZmZmOGM2MjJkM2M1ZThkYzNiYjBiYzdmOTc5NzNh3+M1UQ==: 00:28:57.200 06:55:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:28:57.200 06:55:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:57.200 06:55:01 -- host/auth.sh@68 -- # digest=sha256 00:28:57.200 06:55:01 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:57.200 06:55:01 -- host/auth.sh@68 -- # keyid=3 00:28:57.200 06:55:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:57.200 06:55:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:57.200 06:55:01 -- common/autotest_common.sh@10 -- # set +x 00:28:57.200 06:55:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:57.200 06:55:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:57.200 06:55:01 -- nvmf/common.sh@717 -- # local ip 00:28:57.200 06:55:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:57.200 06:55:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:57.200 06:55:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.200 06:55:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.200 06:55:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:57.200 06:55:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.200 06:55:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:57.200 06:55:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:57.200 06:55:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:57.200 06:55:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:57.200 06:55:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:57.200 06:55:01 -- common/autotest_common.sh@10 -- # set +x 00:28:57.200 nvme0n1 00:28:57.200 06:55:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:57.200 06:55:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.200 06:55:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:57.200 06:55:01 -- common/autotest_common.sh@10 -- # set +x 00:28:57.200 06:55:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:57.200 06:55:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:57.458 06:55:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.458 06:55:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:57.458 06:55:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:57.458 06:55:01 -- common/autotest_common.sh@10 -- # set +x 00:28:57.458 06:55:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:57.458 06:55:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:57.458 06:55:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:57.458 06:55:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:57.458 06:55:01 -- host/auth.sh@44 -- # digest=sha256 00:28:57.458 06:55:01 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:57.458 06:55:01 -- host/auth.sh@44 -- # keyid=4 00:28:57.458 06:55:01 -- host/auth.sh@45 -- # key=DHHC-1:03:MjMyZmQyNTdlMzY0YmQzNGJkMzY4ZTY2ZmQ0NzQwMTlkZTY1NDY0M2Q0M2EzOWIyM2NmZjljYTZiMDkxN2FiYz3zi2k=: 00:28:57.458 06:55:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:57.458 06:55:01 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:57.458 06:55:01 -- host/auth.sh@49 -- # echo DHHC-1:03:MjMyZmQyNTdlMzY0YmQzNGJkMzY4ZTY2ZmQ0NzQwMTlkZTY1NDY0M2Q0M2EzOWIyM2NmZjljYTZiMDkxN2FiYz3zi2k=: 00:28:57.458 06:55:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:28:57.458 06:55:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:57.458 06:55:01 -- host/auth.sh@68 -- # digest=sha256 00:28:57.458 06:55:01 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:57.458 06:55:01 -- host/auth.sh@68 -- # keyid=4 00:28:57.458 06:55:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:57.458 06:55:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:57.458 06:55:01 -- common/autotest_common.sh@10 -- # set +x 00:28:57.458 06:55:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:57.458 06:55:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:57.458 06:55:01 -- nvmf/common.sh@717 -- # local ip 00:28:57.458 06:55:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:57.458 06:55:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:57.458 06:55:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.458 06:55:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.458 06:55:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:57.458 06:55:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.458 06:55:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:57.458 06:55:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:57.458 06:55:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:57.458 06:55:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key4 00:28:57.458 06:55:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:57.458 06:55:01 -- common/autotest_common.sh@10 -- # set +x 00:28:57.458 nvme0n1 00:28:57.458 06:55:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:57.458 06:55:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.458 06:55:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:57.458 06:55:01 -- common/autotest_common.sh@10 -- # set +x 00:28:57.458 06:55:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:57.458 06:55:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:57.458 06:55:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.458 06:55:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:57.458 06:55:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:57.458 06:55:02 -- common/autotest_common.sh@10 -- # set +x 00:28:57.458 06:55:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:57.458 06:55:02 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:57.458 06:55:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:57.458 06:55:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:28:57.458 06:55:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:57.458 06:55:02 -- host/auth.sh@44 -- # digest=sha256 00:28:57.458 06:55:02 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:57.458 06:55:02 -- host/auth.sh@44 -- # keyid=0 00:28:57.458 06:55:02 -- host/auth.sh@45 -- # key=DHHC-1:00:NWY4YTk0ZTg2MzZmMjYyYzhkOTYwZTY3NjQ3Mjk1YWO8+KB+: 00:28:57.458 06:55:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:57.458 06:55:02 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:57.458 06:55:02 -- host/auth.sh@49 -- # echo DHHC-1:00:NWY4YTk0ZTg2MzZmMjYyYzhkOTYwZTY3NjQ3Mjk1YWO8+KB+: 00:28:57.458 06:55:02 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:28:57.458 06:55:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:57.458 06:55:02 -- host/auth.sh@68 -- # digest=sha256 00:28:57.458 06:55:02 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:57.458 06:55:02 -- host/auth.sh@68 -- # keyid=0 00:28:57.458 06:55:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:57.458 06:55:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:57.458 06:55:02 -- common/autotest_common.sh@10 -- # set +x 00:28:57.458 06:55:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:57.458 06:55:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:57.458 06:55:02 -- nvmf/common.sh@717 -- # local ip 00:28:57.458 06:55:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:57.458 06:55:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:57.458 06:55:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.458 06:55:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.458 06:55:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:57.458 06:55:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.458 06:55:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:57.458 06:55:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:57.458 06:55:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:57.458 06:55:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:57.458 06:55:02 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:28:57.458 06:55:02 -- common/autotest_common.sh@10 -- # set +x 00:28:57.716 nvme0n1 00:28:57.716 06:55:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:57.716 06:55:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.716 06:55:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:57.716 06:55:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:57.716 06:55:02 -- common/autotest_common.sh@10 -- # set +x 00:28:57.716 06:55:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:57.974 06:55:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.974 06:55:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:57.974 06:55:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:57.974 06:55:02 -- common/autotest_common.sh@10 -- # set +x 00:28:57.974 06:55:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:57.974 06:55:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:57.974 06:55:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:28:57.974 06:55:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:57.974 06:55:02 -- host/auth.sh@44 -- # digest=sha256 00:28:57.974 06:55:02 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:57.974 06:55:02 -- host/auth.sh@44 -- # keyid=1 00:28:57.974 06:55:02 -- host/auth.sh@45 -- # key=DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:28:57.974 06:55:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:57.974 06:55:02 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:57.974 06:55:02 -- host/auth.sh@49 -- # echo DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:28:57.974 06:55:02 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:28:57.974 06:55:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:57.974 06:55:02 -- host/auth.sh@68 -- # digest=sha256 00:28:57.974 06:55:02 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:57.974 06:55:02 -- host/auth.sh@68 -- # keyid=1 00:28:57.974 06:55:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:57.974 06:55:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:57.974 06:55:02 -- common/autotest_common.sh@10 -- # set +x 00:28:57.974 06:55:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:57.974 06:55:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:57.974 06:55:02 -- nvmf/common.sh@717 -- # local ip 00:28:57.974 06:55:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:57.974 06:55:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:57.974 06:55:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.974 06:55:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.974 06:55:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:57.974 06:55:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.974 06:55:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:57.974 06:55:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:57.974 06:55:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:57.974 06:55:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:57.974 06:55:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:57.974 06:55:02 -- common/autotest_common.sh@10 -- # set +x 00:28:58.232 nvme0n1 00:28:58.232 
06:55:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:58.232 06:55:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.232 06:55:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:58.232 06:55:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:58.232 06:55:02 -- common/autotest_common.sh@10 -- # set +x 00:28:58.232 06:55:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:58.232 06:55:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.232 06:55:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.232 06:55:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:58.232 06:55:02 -- common/autotest_common.sh@10 -- # set +x 00:28:58.232 06:55:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:58.232 06:55:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:58.232 06:55:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:58.232 06:55:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:58.232 06:55:02 -- host/auth.sh@44 -- # digest=sha256 00:28:58.232 06:55:02 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:58.232 06:55:02 -- host/auth.sh@44 -- # keyid=2 00:28:58.232 06:55:02 -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQxNTMwYzNjZGY2ZmQ5NmRjOGYzNmYxYjY4ZjA3NmYINuWZ: 00:28:58.232 06:55:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:58.232 06:55:02 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:58.232 06:55:02 -- host/auth.sh@49 -- # echo DHHC-1:01:ZjQxNTMwYzNjZGY2ZmQ5NmRjOGYzNmYxYjY4ZjA3NmYINuWZ: 00:28:58.232 06:55:02 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:28:58.232 06:55:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:58.232 06:55:02 -- host/auth.sh@68 -- # digest=sha256 00:28:58.232 06:55:02 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:58.232 06:55:02 -- host/auth.sh@68 -- # keyid=2 00:28:58.232 06:55:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:58.232 06:55:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:58.232 06:55:02 -- common/autotest_common.sh@10 -- # set +x 00:28:58.232 06:55:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:58.232 06:55:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:58.232 06:55:02 -- nvmf/common.sh@717 -- # local ip 00:28:58.232 06:55:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:58.232 06:55:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:58.232 06:55:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.232 06:55:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.232 06:55:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:58.232 06:55:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.232 06:55:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:58.232 06:55:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:58.232 06:55:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:58.232 06:55:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:58.232 06:55:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:58.232 06:55:02 -- common/autotest_common.sh@10 -- # set +x 00:28:58.490 nvme0n1 00:28:58.490 06:55:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:58.490 06:55:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.490 06:55:02 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:28:58.490 06:55:02 -- common/autotest_common.sh@10 -- # set +x 00:28:58.490 06:55:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:58.490 06:55:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:58.490 06:55:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.490 06:55:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.490 06:55:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:58.490 06:55:02 -- common/autotest_common.sh@10 -- # set +x 00:28:58.490 06:55:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:58.490 06:55:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:58.490 06:55:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:28:58.490 06:55:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:58.490 06:55:02 -- host/auth.sh@44 -- # digest=sha256 00:28:58.490 06:55:02 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:58.490 06:55:02 -- host/auth.sh@44 -- # keyid=3 00:28:58.490 06:55:02 -- host/auth.sh@45 -- # key=DHHC-1:02:ZGNjMGZhNjg3MDFiYTRiN2YxZmZmOGM2MjJkM2M1ZThkYzNiYjBiYzdmOTc5NzNh3+M1UQ==: 00:28:58.490 06:55:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:58.490 06:55:02 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:58.490 06:55:02 -- host/auth.sh@49 -- # echo DHHC-1:02:ZGNjMGZhNjg3MDFiYTRiN2YxZmZmOGM2MjJkM2M1ZThkYzNiYjBiYzdmOTc5NzNh3+M1UQ==: 00:28:58.490 06:55:02 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:28:58.490 06:55:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:58.490 06:55:02 -- host/auth.sh@68 -- # digest=sha256 00:28:58.490 06:55:02 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:58.490 06:55:02 -- host/auth.sh@68 -- # keyid=3 00:28:58.490 06:55:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:58.490 06:55:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:58.490 06:55:02 -- common/autotest_common.sh@10 -- # set +x 00:28:58.490 06:55:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:58.490 06:55:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:58.490 06:55:02 -- nvmf/common.sh@717 -- # local ip 00:28:58.490 06:55:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:58.490 06:55:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:58.490 06:55:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.490 06:55:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.490 06:55:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:58.490 06:55:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.490 06:55:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:58.490 06:55:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:58.490 06:55:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:58.490 06:55:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:58.490 06:55:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:58.490 06:55:02 -- common/autotest_common.sh@10 -- # set +x 00:28:58.747 nvme0n1 00:28:58.747 06:55:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:58.747 06:55:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.747 06:55:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:58.747 06:55:03 -- common/autotest_common.sh@10 -- # set +x 
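For orientation, every iteration traced in this pass follows the same RPC sequence from host/auth.sh: install the target-side DH-HMAC-CHAP key for the digest/dhgroup/keyid under test, restrict the host's allowed digests and DH groups, attach the controller with the matching --dhchap-key, confirm it enumerates as nvme0, then detach it. A minimal sketch of one sha256/ffdhe4096 iteration is given below; rpc_cmd and nvmet_auth_set_key are the test suite's own helpers (they only resolve inside the sourced host/auth.sh environment), and the parameter values are simply the ones visible in the trace above, not values to reuse elsewhere.

  # One iteration of the sha256/ffdhe4096 pass, as traced above (keyid 3)
  nvmet_auth_set_key sha256 ffdhe4096 3            # target side: install key id 3 with hmac(sha256)
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expected to report nvme0
  rpc_cmd bdev_nvme_detach_controller nvme0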
00:28:58.747 06:55:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:58.747 06:55:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:58.747 06:55:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.747 06:55:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.747 06:55:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:58.747 06:55:03 -- common/autotest_common.sh@10 -- # set +x 00:28:58.747 06:55:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:58.747 06:55:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:58.747 06:55:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:58.747 06:55:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:58.747 06:55:03 -- host/auth.sh@44 -- # digest=sha256 00:28:58.747 06:55:03 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:58.747 06:55:03 -- host/auth.sh@44 -- # keyid=4 00:28:58.747 06:55:03 -- host/auth.sh@45 -- # key=DHHC-1:03:MjMyZmQyNTdlMzY0YmQzNGJkMzY4ZTY2ZmQ0NzQwMTlkZTY1NDY0M2Q0M2EzOWIyM2NmZjljYTZiMDkxN2FiYz3zi2k=: 00:28:58.747 06:55:03 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:58.747 06:55:03 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:58.747 06:55:03 -- host/auth.sh@49 -- # echo DHHC-1:03:MjMyZmQyNTdlMzY0YmQzNGJkMzY4ZTY2ZmQ0NzQwMTlkZTY1NDY0M2Q0M2EzOWIyM2NmZjljYTZiMDkxN2FiYz3zi2k=: 00:28:58.747 06:55:03 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:28:58.747 06:55:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:58.747 06:55:03 -- host/auth.sh@68 -- # digest=sha256 00:28:58.747 06:55:03 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:58.747 06:55:03 -- host/auth.sh@68 -- # keyid=4 00:28:58.747 06:55:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:58.747 06:55:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:58.747 06:55:03 -- common/autotest_common.sh@10 -- # set +x 00:28:58.747 06:55:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:58.747 06:55:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:58.747 06:55:03 -- nvmf/common.sh@717 -- # local ip 00:28:58.747 06:55:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:58.747 06:55:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:58.747 06:55:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.747 06:55:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.747 06:55:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:58.747 06:55:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.747 06:55:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:58.747 06:55:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:58.747 06:55:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:58.747 06:55:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:58.747 06:55:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:58.747 06:55:03 -- common/autotest_common.sh@10 -- # set +x 00:28:59.005 nvme0n1 00:28:59.005 06:55:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:59.005 06:55:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.005 06:55:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:59.005 06:55:03 -- common/autotest_common.sh@10 -- # set +x 00:28:59.005 06:55:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:59.005 
06:55:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:59.005 06:55:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.005 06:55:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.005 06:55:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:59.005 06:55:03 -- common/autotest_common.sh@10 -- # set +x 00:28:59.263 06:55:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:59.263 06:55:03 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:59.263 06:55:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:59.263 06:55:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:28:59.263 06:55:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:59.263 06:55:03 -- host/auth.sh@44 -- # digest=sha256 00:28:59.263 06:55:03 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:59.263 06:55:03 -- host/auth.sh@44 -- # keyid=0 00:28:59.263 06:55:03 -- host/auth.sh@45 -- # key=DHHC-1:00:NWY4YTk0ZTg2MzZmMjYyYzhkOTYwZTY3NjQ3Mjk1YWO8+KB+: 00:28:59.263 06:55:03 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:59.263 06:55:03 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:59.263 06:55:03 -- host/auth.sh@49 -- # echo DHHC-1:00:NWY4YTk0ZTg2MzZmMjYyYzhkOTYwZTY3NjQ3Mjk1YWO8+KB+: 00:28:59.263 06:55:03 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:28:59.263 06:55:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:59.263 06:55:03 -- host/auth.sh@68 -- # digest=sha256 00:28:59.263 06:55:03 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:59.263 06:55:03 -- host/auth.sh@68 -- # keyid=0 00:28:59.263 06:55:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:59.263 06:55:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:59.263 06:55:03 -- common/autotest_common.sh@10 -- # set +x 00:28:59.263 06:55:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:59.263 06:55:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:59.263 06:55:03 -- nvmf/common.sh@717 -- # local ip 00:28:59.263 06:55:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:59.263 06:55:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:59.263 06:55:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:59.263 06:55:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:59.263 06:55:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:59.263 06:55:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:59.263 06:55:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:59.263 06:55:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:59.263 06:55:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:59.263 06:55:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:59.263 06:55:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:59.263 06:55:03 -- common/autotest_common.sh@10 -- # set +x 00:28:59.521 nvme0n1 00:28:59.521 06:55:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:59.521 06:55:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.521 06:55:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:59.521 06:55:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:59.521 06:55:04 -- common/autotest_common.sh@10 -- # set +x 00:28:59.779 06:55:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:59.779 06:55:04 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.779 06:55:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.779 06:55:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:59.779 06:55:04 -- common/autotest_common.sh@10 -- # set +x 00:28:59.779 06:55:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:59.779 06:55:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:59.779 06:55:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:59.779 06:55:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:59.779 06:55:04 -- host/auth.sh@44 -- # digest=sha256 00:28:59.779 06:55:04 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:59.779 06:55:04 -- host/auth.sh@44 -- # keyid=1 00:28:59.779 06:55:04 -- host/auth.sh@45 -- # key=DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:28:59.779 06:55:04 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:59.779 06:55:04 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:59.779 06:55:04 -- host/auth.sh@49 -- # echo DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:28:59.779 06:55:04 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:28:59.779 06:55:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:59.779 06:55:04 -- host/auth.sh@68 -- # digest=sha256 00:28:59.779 06:55:04 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:59.779 06:55:04 -- host/auth.sh@68 -- # keyid=1 00:28:59.779 06:55:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:59.779 06:55:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:59.779 06:55:04 -- common/autotest_common.sh@10 -- # set +x 00:28:59.779 06:55:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:59.779 06:55:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:59.779 06:55:04 -- nvmf/common.sh@717 -- # local ip 00:28:59.779 06:55:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:59.779 06:55:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:59.779 06:55:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:59.779 06:55:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:59.779 06:55:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:59.779 06:55:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:59.779 06:55:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:59.779 06:55:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:59.779 06:55:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:59.779 06:55:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:59.779 06:55:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:59.779 06:55:04 -- common/autotest_common.sh@10 -- # set +x 00:29:00.037 nvme0n1 00:29:00.037 06:55:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:00.037 06:55:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.037 06:55:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:00.037 06:55:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:00.037 06:55:04 -- common/autotest_common.sh@10 -- # set +x 00:29:00.294 06:55:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:00.294 06:55:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.294 06:55:04 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:00.294 06:55:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:00.294 06:55:04 -- common/autotest_common.sh@10 -- # set +x 00:29:00.294 06:55:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:00.294 06:55:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:00.294 06:55:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:29:00.294 06:55:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:00.294 06:55:04 -- host/auth.sh@44 -- # digest=sha256 00:29:00.294 06:55:04 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:00.294 06:55:04 -- host/auth.sh@44 -- # keyid=2 00:29:00.294 06:55:04 -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQxNTMwYzNjZGY2ZmQ5NmRjOGYzNmYxYjY4ZjA3NmYINuWZ: 00:29:00.294 06:55:04 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:29:00.294 06:55:04 -- host/auth.sh@48 -- # echo ffdhe6144 00:29:00.295 06:55:04 -- host/auth.sh@49 -- # echo DHHC-1:01:ZjQxNTMwYzNjZGY2ZmQ5NmRjOGYzNmYxYjY4ZjA3NmYINuWZ: 00:29:00.295 06:55:04 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:29:00.295 06:55:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:00.295 06:55:04 -- host/auth.sh@68 -- # digest=sha256 00:29:00.295 06:55:04 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:29:00.295 06:55:04 -- host/auth.sh@68 -- # keyid=2 00:29:00.295 06:55:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:00.295 06:55:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:00.295 06:55:04 -- common/autotest_common.sh@10 -- # set +x 00:29:00.295 06:55:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:00.295 06:55:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:00.295 06:55:04 -- nvmf/common.sh@717 -- # local ip 00:29:00.295 06:55:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:00.295 06:55:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:00.295 06:55:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.295 06:55:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.295 06:55:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:00.295 06:55:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.295 06:55:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:00.295 06:55:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:00.295 06:55:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:00.295 06:55:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:00.295 06:55:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:00.295 06:55:04 -- common/autotest_common.sh@10 -- # set +x 00:29:00.859 nvme0n1 00:29:00.859 06:55:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:00.859 06:55:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.859 06:55:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:00.859 06:55:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:00.859 06:55:05 -- common/autotest_common.sh@10 -- # set +x 00:29:00.859 06:55:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:00.859 06:55:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.859 06:55:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.860 06:55:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:00.860 06:55:05 -- common/autotest_common.sh@10 -- # 
set +x 00:29:00.860 06:55:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:00.860 06:55:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:00.860 06:55:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:29:00.860 06:55:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:00.860 06:55:05 -- host/auth.sh@44 -- # digest=sha256 00:29:00.860 06:55:05 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:00.860 06:55:05 -- host/auth.sh@44 -- # keyid=3 00:29:00.860 06:55:05 -- host/auth.sh@45 -- # key=DHHC-1:02:ZGNjMGZhNjg3MDFiYTRiN2YxZmZmOGM2MjJkM2M1ZThkYzNiYjBiYzdmOTc5NzNh3+M1UQ==: 00:29:00.860 06:55:05 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:29:00.860 06:55:05 -- host/auth.sh@48 -- # echo ffdhe6144 00:29:00.860 06:55:05 -- host/auth.sh@49 -- # echo DHHC-1:02:ZGNjMGZhNjg3MDFiYTRiN2YxZmZmOGM2MjJkM2M1ZThkYzNiYjBiYzdmOTc5NzNh3+M1UQ==: 00:29:00.860 06:55:05 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:29:00.860 06:55:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:00.860 06:55:05 -- host/auth.sh@68 -- # digest=sha256 00:29:00.860 06:55:05 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:29:00.860 06:55:05 -- host/auth.sh@68 -- # keyid=3 00:29:00.860 06:55:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:00.860 06:55:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:00.860 06:55:05 -- common/autotest_common.sh@10 -- # set +x 00:29:00.860 06:55:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:00.860 06:55:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:00.860 06:55:05 -- nvmf/common.sh@717 -- # local ip 00:29:00.860 06:55:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:00.860 06:55:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:00.860 06:55:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.860 06:55:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.860 06:55:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:00.860 06:55:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.860 06:55:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:00.860 06:55:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:00.860 06:55:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:00.860 06:55:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:29:00.860 06:55:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:00.860 06:55:05 -- common/autotest_common.sh@10 -- # set +x 00:29:01.425 nvme0n1 00:29:01.425 06:55:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:01.425 06:55:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:01.425 06:55:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:01.425 06:55:05 -- common/autotest_common.sh@10 -- # set +x 00:29:01.425 06:55:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:01.425 06:55:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:01.425 06:55:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:01.425 06:55:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:01.425 06:55:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:01.425 06:55:05 -- common/autotest_common.sh@10 -- # set +x 00:29:01.425 06:55:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:01.425 06:55:05 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:01.425 06:55:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:29:01.425 06:55:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:01.425 06:55:05 -- host/auth.sh@44 -- # digest=sha256 00:29:01.425 06:55:05 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:01.425 06:55:05 -- host/auth.sh@44 -- # keyid=4 00:29:01.425 06:55:05 -- host/auth.sh@45 -- # key=DHHC-1:03:MjMyZmQyNTdlMzY0YmQzNGJkMzY4ZTY2ZmQ0NzQwMTlkZTY1NDY0M2Q0M2EzOWIyM2NmZjljYTZiMDkxN2FiYz3zi2k=: 00:29:01.425 06:55:05 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:29:01.425 06:55:05 -- host/auth.sh@48 -- # echo ffdhe6144 00:29:01.425 06:55:05 -- host/auth.sh@49 -- # echo DHHC-1:03:MjMyZmQyNTdlMzY0YmQzNGJkMzY4ZTY2ZmQ0NzQwMTlkZTY1NDY0M2Q0M2EzOWIyM2NmZjljYTZiMDkxN2FiYz3zi2k=: 00:29:01.425 06:55:05 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:29:01.425 06:55:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:01.425 06:55:05 -- host/auth.sh@68 -- # digest=sha256 00:29:01.425 06:55:05 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:29:01.425 06:55:05 -- host/auth.sh@68 -- # keyid=4 00:29:01.425 06:55:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:01.425 06:55:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:01.425 06:55:05 -- common/autotest_common.sh@10 -- # set +x 00:29:01.425 06:55:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:01.425 06:55:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:01.425 06:55:05 -- nvmf/common.sh@717 -- # local ip 00:29:01.425 06:55:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:01.425 06:55:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:01.425 06:55:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:01.425 06:55:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:01.425 06:55:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:01.425 06:55:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:01.425 06:55:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:01.425 06:55:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:01.425 06:55:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:01.425 06:55:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:01.425 06:55:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:01.425 06:55:05 -- common/autotest_common.sh@10 -- # set +x 00:29:01.990 nvme0n1 00:29:01.990 06:55:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:01.990 06:55:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:01.990 06:55:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:01.990 06:55:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:01.990 06:55:06 -- common/autotest_common.sh@10 -- # set +x 00:29:01.990 06:55:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:01.990 06:55:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:01.990 06:55:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:01.990 06:55:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:01.990 06:55:06 -- common/autotest_common.sh@10 -- # set +x 00:29:01.990 06:55:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:01.990 06:55:06 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:29:01.990 06:55:06 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:01.990 06:55:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:29:01.990 06:55:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:01.990 06:55:06 -- host/auth.sh@44 -- # digest=sha256 00:29:01.990 06:55:06 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:01.990 06:55:06 -- host/auth.sh@44 -- # keyid=0 00:29:01.990 06:55:06 -- host/auth.sh@45 -- # key=DHHC-1:00:NWY4YTk0ZTg2MzZmMjYyYzhkOTYwZTY3NjQ3Mjk1YWO8+KB+: 00:29:01.990 06:55:06 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:29:01.990 06:55:06 -- host/auth.sh@48 -- # echo ffdhe8192 00:29:01.990 06:55:06 -- host/auth.sh@49 -- # echo DHHC-1:00:NWY4YTk0ZTg2MzZmMjYyYzhkOTYwZTY3NjQ3Mjk1YWO8+KB+: 00:29:01.990 06:55:06 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:29:01.990 06:55:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:01.990 06:55:06 -- host/auth.sh@68 -- # digest=sha256 00:29:01.990 06:55:06 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:29:01.990 06:55:06 -- host/auth.sh@68 -- # keyid=0 00:29:01.990 06:55:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:01.990 06:55:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:01.990 06:55:06 -- common/autotest_common.sh@10 -- # set +x 00:29:01.990 06:55:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:01.990 06:55:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:01.990 06:55:06 -- nvmf/common.sh@717 -- # local ip 00:29:01.990 06:55:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:01.990 06:55:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:01.990 06:55:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:01.990 06:55:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:01.990 06:55:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:01.990 06:55:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:01.990 06:55:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:01.990 06:55:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:01.990 06:55:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:01.990 06:55:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:29:01.990 06:55:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:01.990 06:55:06 -- common/autotest_common.sh@10 -- # set +x 00:29:02.922 nvme0n1 00:29:02.922 06:55:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:02.922 06:55:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:02.922 06:55:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:02.922 06:55:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:02.922 06:55:07 -- common/autotest_common.sh@10 -- # set +x 00:29:02.922 06:55:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:02.922 06:55:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:02.922 06:55:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:02.922 06:55:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:02.922 06:55:07 -- common/autotest_common.sh@10 -- # set +x 00:29:02.922 06:55:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:02.922 06:55:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:02.922 06:55:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:29:02.922 06:55:07 -- 
host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:02.922 06:55:07 -- host/auth.sh@44 -- # digest=sha256 00:29:02.922 06:55:07 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:02.922 06:55:07 -- host/auth.sh@44 -- # keyid=1 00:29:02.922 06:55:07 -- host/auth.sh@45 -- # key=DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:29:02.922 06:55:07 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:29:02.922 06:55:07 -- host/auth.sh@48 -- # echo ffdhe8192 00:29:02.922 06:55:07 -- host/auth.sh@49 -- # echo DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:29:02.922 06:55:07 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:29:02.922 06:55:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:02.922 06:55:07 -- host/auth.sh@68 -- # digest=sha256 00:29:02.922 06:55:07 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:29:02.922 06:55:07 -- host/auth.sh@68 -- # keyid=1 00:29:02.922 06:55:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:02.922 06:55:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:02.922 06:55:07 -- common/autotest_common.sh@10 -- # set +x 00:29:02.922 06:55:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:02.922 06:55:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:02.922 06:55:07 -- nvmf/common.sh@717 -- # local ip 00:29:02.922 06:55:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:02.922 06:55:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:02.922 06:55:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:02.922 06:55:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:02.922 06:55:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:02.922 06:55:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:02.922 06:55:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:02.922 06:55:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:02.922 06:55:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:02.922 06:55:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:29:02.922 06:55:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:02.922 06:55:07 -- common/autotest_common.sh@10 -- # set +x 00:29:03.856 nvme0n1 00:29:03.856 06:55:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:03.856 06:55:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.856 06:55:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:03.856 06:55:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:03.856 06:55:08 -- common/autotest_common.sh@10 -- # set +x 00:29:03.856 06:55:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:03.856 06:55:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.856 06:55:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:03.856 06:55:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:03.856 06:55:08 -- common/autotest_common.sh@10 -- # set +x 00:29:03.856 06:55:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:03.856 06:55:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:03.856 06:55:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:29:03.856 06:55:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:03.856 06:55:08 -- host/auth.sh@44 -- # digest=sha256 
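The passes themselves are driven by three nested loops in host/auth.sh (the for-lines tagged @107, @108 and @109 in the trace): outer over digests, middle over DH groups, inner over key ids 0-4. The sketch below reconstructs that structure using only the values visible in this part of the log, so the literal lists shown are illustrative rather than the script's authoritative array definitions.

  # Loop structure reconstructed from the trace (values limited to this excerpt)
  for digest in sha256 sha384; do
    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
      for keyid in "${!keys[@]}"; do                       # key ids 0-4 in this run
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # target-side key and parameters
        connect_authenticate "$digest" "$dhgroup" "$keyid" # host-side attach, verify nvme0, detach
      done
    done
  done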
00:29:03.856 06:55:08 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:03.856 06:55:08 -- host/auth.sh@44 -- # keyid=2 00:29:03.856 06:55:08 -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQxNTMwYzNjZGY2ZmQ5NmRjOGYzNmYxYjY4ZjA3NmYINuWZ: 00:29:03.856 06:55:08 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:29:03.856 06:55:08 -- host/auth.sh@48 -- # echo ffdhe8192 00:29:03.856 06:55:08 -- host/auth.sh@49 -- # echo DHHC-1:01:ZjQxNTMwYzNjZGY2ZmQ5NmRjOGYzNmYxYjY4ZjA3NmYINuWZ: 00:29:03.856 06:55:08 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:29:03.856 06:55:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:03.856 06:55:08 -- host/auth.sh@68 -- # digest=sha256 00:29:03.856 06:55:08 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:29:03.856 06:55:08 -- host/auth.sh@68 -- # keyid=2 00:29:03.856 06:55:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:03.856 06:55:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:03.856 06:55:08 -- common/autotest_common.sh@10 -- # set +x 00:29:03.856 06:55:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:03.856 06:55:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:03.856 06:55:08 -- nvmf/common.sh@717 -- # local ip 00:29:03.856 06:55:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:03.856 06:55:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:03.856 06:55:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.856 06:55:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.856 06:55:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:03.856 06:55:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:03.856 06:55:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:03.856 06:55:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:03.856 06:55:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:03.856 06:55:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:03.856 06:55:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:03.856 06:55:08 -- common/autotest_common.sh@10 -- # set +x 00:29:04.790 nvme0n1 00:29:04.790 06:55:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:04.790 06:55:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.790 06:55:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:04.790 06:55:09 -- common/autotest_common.sh@10 -- # set +x 00:29:04.790 06:55:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:04.790 06:55:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:04.790 06:55:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.790 06:55:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.790 06:55:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:04.790 06:55:09 -- common/autotest_common.sh@10 -- # set +x 00:29:04.790 06:55:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:04.790 06:55:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:04.790 06:55:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:29:04.790 06:55:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:04.790 06:55:09 -- host/auth.sh@44 -- # digest=sha256 00:29:04.790 06:55:09 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:04.790 06:55:09 -- host/auth.sh@44 -- # keyid=3 00:29:04.790 06:55:09 -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZGNjMGZhNjg3MDFiYTRiN2YxZmZmOGM2MjJkM2M1ZThkYzNiYjBiYzdmOTc5NzNh3+M1UQ==: 00:29:04.790 06:55:09 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:29:04.790 06:55:09 -- host/auth.sh@48 -- # echo ffdhe8192 00:29:04.790 06:55:09 -- host/auth.sh@49 -- # echo DHHC-1:02:ZGNjMGZhNjg3MDFiYTRiN2YxZmZmOGM2MjJkM2M1ZThkYzNiYjBiYzdmOTc5NzNh3+M1UQ==: 00:29:04.790 06:55:09 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:29:04.790 06:55:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:04.790 06:55:09 -- host/auth.sh@68 -- # digest=sha256 00:29:04.790 06:55:09 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:29:04.790 06:55:09 -- host/auth.sh@68 -- # keyid=3 00:29:04.790 06:55:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:04.790 06:55:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:04.790 06:55:09 -- common/autotest_common.sh@10 -- # set +x 00:29:04.790 06:55:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:04.790 06:55:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:04.790 06:55:09 -- nvmf/common.sh@717 -- # local ip 00:29:04.790 06:55:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:04.790 06:55:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:04.790 06:55:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.790 06:55:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.790 06:55:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:04.790 06:55:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.790 06:55:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:04.790 06:55:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:04.790 06:55:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:04.790 06:55:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:29:04.790 06:55:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:04.790 06:55:09 -- common/autotest_common.sh@10 -- # set +x 00:29:05.755 nvme0n1 00:29:05.755 06:55:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:05.755 06:55:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.755 06:55:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:05.755 06:55:10 -- common/autotest_common.sh@10 -- # set +x 00:29:05.755 06:55:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:05.755 06:55:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:05.755 06:55:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.755 06:55:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.755 06:55:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:05.755 06:55:10 -- common/autotest_common.sh@10 -- # set +x 00:29:05.755 06:55:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:05.755 06:55:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:05.755 06:55:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:29:05.755 06:55:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:05.755 06:55:10 -- host/auth.sh@44 -- # digest=sha256 00:29:05.755 06:55:10 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:05.755 06:55:10 -- host/auth.sh@44 -- # keyid=4 00:29:05.755 06:55:10 -- host/auth.sh@45 -- # key=DHHC-1:03:MjMyZmQyNTdlMzY0YmQzNGJkMzY4ZTY2ZmQ0NzQwMTlkZTY1NDY0M2Q0M2EzOWIyM2NmZjljYTZiMDkxN2FiYz3zi2k=: 00:29:05.755 
06:55:10 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:29:05.755 06:55:10 -- host/auth.sh@48 -- # echo ffdhe8192 00:29:05.755 06:55:10 -- host/auth.sh@49 -- # echo DHHC-1:03:MjMyZmQyNTdlMzY0YmQzNGJkMzY4ZTY2ZmQ0NzQwMTlkZTY1NDY0M2Q0M2EzOWIyM2NmZjljYTZiMDkxN2FiYz3zi2k=: 00:29:05.755 06:55:10 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:29:05.755 06:55:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:05.755 06:55:10 -- host/auth.sh@68 -- # digest=sha256 00:29:05.755 06:55:10 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:29:05.755 06:55:10 -- host/auth.sh@68 -- # keyid=4 00:29:05.755 06:55:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:05.755 06:55:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:05.755 06:55:10 -- common/autotest_common.sh@10 -- # set +x 00:29:05.755 06:55:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:05.755 06:55:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:05.755 06:55:10 -- nvmf/common.sh@717 -- # local ip 00:29:05.755 06:55:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:05.755 06:55:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:05.755 06:55:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.755 06:55:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.755 06:55:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:05.755 06:55:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.755 06:55:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:05.755 06:55:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:05.755 06:55:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:05.755 06:55:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:05.755 06:55:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:05.755 06:55:10 -- common/autotest_common.sh@10 -- # set +x 00:29:06.689 nvme0n1 00:29:06.689 06:55:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:06.689 06:55:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:06.689 06:55:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:06.689 06:55:11 -- common/autotest_common.sh@10 -- # set +x 00:29:06.689 06:55:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:06.689 06:55:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:06.689 06:55:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:06.689 06:55:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:06.689 06:55:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:06.689 06:55:11 -- common/autotest_common.sh@10 -- # set +x 00:29:06.689 06:55:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:06.689 06:55:11 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:29:06.689 06:55:11 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:29:06.689 06:55:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:06.689 06:55:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:29:06.689 06:55:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:06.689 06:55:11 -- host/auth.sh@44 -- # digest=sha384 00:29:06.689 06:55:11 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:06.689 06:55:11 -- host/auth.sh@44 -- # keyid=0 00:29:06.689 06:55:11 -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWY4YTk0ZTg2MzZmMjYyYzhkOTYwZTY3NjQ3Mjk1YWO8+KB+: 00:29:06.689 06:55:11 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:06.689 06:55:11 -- host/auth.sh@48 -- # echo ffdhe2048 00:29:06.689 06:55:11 -- host/auth.sh@49 -- # echo DHHC-1:00:NWY4YTk0ZTg2MzZmMjYyYzhkOTYwZTY3NjQ3Mjk1YWO8+KB+: 00:29:06.689 06:55:11 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:29:06.689 06:55:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:06.689 06:55:11 -- host/auth.sh@68 -- # digest=sha384 00:29:06.689 06:55:11 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:29:06.689 06:55:11 -- host/auth.sh@68 -- # keyid=0 00:29:06.689 06:55:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:06.689 06:55:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:06.689 06:55:11 -- common/autotest_common.sh@10 -- # set +x 00:29:06.689 06:55:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:06.689 06:55:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:06.689 06:55:11 -- nvmf/common.sh@717 -- # local ip 00:29:06.689 06:55:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:06.689 06:55:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:06.689 06:55:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:06.689 06:55:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:06.690 06:55:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:06.690 06:55:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:06.690 06:55:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:06.690 06:55:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:06.690 06:55:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:06.690 06:55:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:29:06.690 06:55:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:06.690 06:55:11 -- common/autotest_common.sh@10 -- # set +x 00:29:06.948 nvme0n1 00:29:06.948 06:55:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:06.948 06:55:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:06.948 06:55:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:06.948 06:55:11 -- common/autotest_common.sh@10 -- # set +x 00:29:06.948 06:55:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:06.948 06:55:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:06.948 06:55:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:06.948 06:55:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:06.948 06:55:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:06.948 06:55:11 -- common/autotest_common.sh@10 -- # set +x 00:29:06.948 06:55:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:06.948 06:55:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:06.948 06:55:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:29:06.948 06:55:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:06.948 06:55:11 -- host/auth.sh@44 -- # digest=sha384 00:29:06.948 06:55:11 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:06.948 06:55:11 -- host/auth.sh@44 -- # keyid=1 00:29:06.948 06:55:11 -- host/auth.sh@45 -- # key=DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:29:06.948 06:55:11 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:06.948 
06:55:11 -- host/auth.sh@48 -- # echo ffdhe2048 00:29:06.948 06:55:11 -- host/auth.sh@49 -- # echo DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:29:06.948 06:55:11 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:29:06.948 06:55:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:06.948 06:55:11 -- host/auth.sh@68 -- # digest=sha384 00:29:06.948 06:55:11 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:29:06.948 06:55:11 -- host/auth.sh@68 -- # keyid=1 00:29:06.948 06:55:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:06.948 06:55:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:06.948 06:55:11 -- common/autotest_common.sh@10 -- # set +x 00:29:06.948 06:55:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:06.948 06:55:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:06.948 06:55:11 -- nvmf/common.sh@717 -- # local ip 00:29:06.948 06:55:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:06.948 06:55:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:06.948 06:55:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:06.948 06:55:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:06.948 06:55:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:06.948 06:55:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:06.948 06:55:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:06.948 06:55:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:06.948 06:55:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:06.948 06:55:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:29:06.948 06:55:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:06.948 06:55:11 -- common/autotest_common.sh@10 -- # set +x 00:29:07.206 nvme0n1 00:29:07.206 06:55:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:07.206 06:55:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.206 06:55:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:07.206 06:55:11 -- common/autotest_common.sh@10 -- # set +x 00:29:07.206 06:55:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:07.206 06:55:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:07.206 06:55:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.206 06:55:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.206 06:55:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:07.206 06:55:11 -- common/autotest_common.sh@10 -- # set +x 00:29:07.206 06:55:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:07.206 06:55:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:07.206 06:55:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:29:07.206 06:55:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:07.206 06:55:11 -- host/auth.sh@44 -- # digest=sha384 00:29:07.206 06:55:11 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:07.206 06:55:11 -- host/auth.sh@44 -- # keyid=2 00:29:07.206 06:55:11 -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQxNTMwYzNjZGY2ZmQ5NmRjOGYzNmYxYjY4ZjA3NmYINuWZ: 00:29:07.206 06:55:11 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:07.206 06:55:11 -- host/auth.sh@48 -- # echo ffdhe2048 00:29:07.206 06:55:11 -- host/auth.sh@49 -- # echo 
DHHC-1:01:ZjQxNTMwYzNjZGY2ZmQ5NmRjOGYzNmYxYjY4ZjA3NmYINuWZ: 00:29:07.206 06:55:11 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:29:07.206 06:55:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:07.206 06:55:11 -- host/auth.sh@68 -- # digest=sha384 00:29:07.206 06:55:11 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:29:07.206 06:55:11 -- host/auth.sh@68 -- # keyid=2 00:29:07.206 06:55:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:07.206 06:55:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:07.206 06:55:11 -- common/autotest_common.sh@10 -- # set +x 00:29:07.206 06:55:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:07.206 06:55:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:07.206 06:55:11 -- nvmf/common.sh@717 -- # local ip 00:29:07.206 06:55:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:07.206 06:55:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:07.206 06:55:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.206 06:55:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.206 06:55:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:07.206 06:55:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.206 06:55:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:07.206 06:55:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:07.206 06:55:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:07.206 06:55:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:07.206 06:55:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:07.206 06:55:11 -- common/autotest_common.sh@10 -- # set +x 00:29:07.206 nvme0n1 00:29:07.206 06:55:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:07.206 06:55:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.206 06:55:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:07.206 06:55:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:07.206 06:55:11 -- common/autotest_common.sh@10 -- # set +x 00:29:07.206 06:55:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:07.465 06:55:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.465 06:55:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.465 06:55:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:07.465 06:55:11 -- common/autotest_common.sh@10 -- # set +x 00:29:07.465 06:55:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:07.465 06:55:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:07.465 06:55:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:29:07.465 06:55:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:07.465 06:55:11 -- host/auth.sh@44 -- # digest=sha384 00:29:07.465 06:55:11 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:07.465 06:55:11 -- host/auth.sh@44 -- # keyid=3 00:29:07.465 06:55:11 -- host/auth.sh@45 -- # key=DHHC-1:02:ZGNjMGZhNjg3MDFiYTRiN2YxZmZmOGM2MjJkM2M1ZThkYzNiYjBiYzdmOTc5NzNh3+M1UQ==: 00:29:07.465 06:55:11 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:07.465 06:55:11 -- host/auth.sh@48 -- # echo ffdhe2048 00:29:07.465 06:55:11 -- host/auth.sh@49 -- # echo DHHC-1:02:ZGNjMGZhNjg3MDFiYTRiN2YxZmZmOGM2MjJkM2M1ZThkYzNiYjBiYzdmOTc5NzNh3+M1UQ==: 00:29:07.465 06:55:11 -- host/auth.sh@111 -- # 
connect_authenticate sha384 ffdhe2048 3 00:29:07.465 06:55:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:07.465 06:55:11 -- host/auth.sh@68 -- # digest=sha384 00:29:07.465 06:55:11 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:29:07.465 06:55:11 -- host/auth.sh@68 -- # keyid=3 00:29:07.465 06:55:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:07.465 06:55:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:07.465 06:55:11 -- common/autotest_common.sh@10 -- # set +x 00:29:07.465 06:55:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:07.465 06:55:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:07.465 06:55:11 -- nvmf/common.sh@717 -- # local ip 00:29:07.465 06:55:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:07.465 06:55:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:07.465 06:55:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.465 06:55:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.465 06:55:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:07.465 06:55:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.465 06:55:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:07.465 06:55:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:07.465 06:55:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:07.465 06:55:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:29:07.465 06:55:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:07.465 06:55:11 -- common/autotest_common.sh@10 -- # set +x 00:29:07.465 nvme0n1 00:29:07.465 06:55:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:07.465 06:55:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.465 06:55:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:07.465 06:55:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:07.465 06:55:11 -- common/autotest_common.sh@10 -- # set +x 00:29:07.465 06:55:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:07.465 06:55:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.465 06:55:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.465 06:55:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:07.465 06:55:12 -- common/autotest_common.sh@10 -- # set +x 00:29:07.465 06:55:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:07.465 06:55:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:07.465 06:55:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:29:07.465 06:55:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:07.465 06:55:12 -- host/auth.sh@44 -- # digest=sha384 00:29:07.465 06:55:12 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:07.465 06:55:12 -- host/auth.sh@44 -- # keyid=4 00:29:07.465 06:55:12 -- host/auth.sh@45 -- # key=DHHC-1:03:MjMyZmQyNTdlMzY0YmQzNGJkMzY4ZTY2ZmQ0NzQwMTlkZTY1NDY0M2Q0M2EzOWIyM2NmZjljYTZiMDkxN2FiYz3zi2k=: 00:29:07.465 06:55:12 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:07.465 06:55:12 -- host/auth.sh@48 -- # echo ffdhe2048 00:29:07.465 06:55:12 -- host/auth.sh@49 -- # echo DHHC-1:03:MjMyZmQyNTdlMzY0YmQzNGJkMzY4ZTY2ZmQ0NzQwMTlkZTY1NDY0M2Q0M2EzOWIyM2NmZjljYTZiMDkxN2FiYz3zi2k=: 00:29:07.465 06:55:12 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:29:07.465 06:55:12 -- host/auth.sh@66 
-- # local digest dhgroup keyid 00:29:07.465 06:55:12 -- host/auth.sh@68 -- # digest=sha384 00:29:07.465 06:55:12 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:29:07.465 06:55:12 -- host/auth.sh@68 -- # keyid=4 00:29:07.465 06:55:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:07.465 06:55:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:07.465 06:55:12 -- common/autotest_common.sh@10 -- # set +x 00:29:07.465 06:55:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:07.465 06:55:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:07.465 06:55:12 -- nvmf/common.sh@717 -- # local ip 00:29:07.465 06:55:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:07.465 06:55:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:07.465 06:55:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.465 06:55:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.465 06:55:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:07.465 06:55:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.465 06:55:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:07.465 06:55:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:07.465 06:55:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:07.465 06:55:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:07.465 06:55:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:07.465 06:55:12 -- common/autotest_common.sh@10 -- # set +x 00:29:07.724 nvme0n1 00:29:07.724 06:55:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:07.724 06:55:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.724 06:55:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:07.724 06:55:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:07.724 06:55:12 -- common/autotest_common.sh@10 -- # set +x 00:29:07.724 06:55:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:07.724 06:55:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.724 06:55:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.724 06:55:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:07.724 06:55:12 -- common/autotest_common.sh@10 -- # set +x 00:29:07.724 06:55:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:07.724 06:55:12 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:29:07.724 06:55:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:07.724 06:55:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:29:07.724 06:55:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:07.724 06:55:12 -- host/auth.sh@44 -- # digest=sha384 00:29:07.724 06:55:12 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:07.724 06:55:12 -- host/auth.sh@44 -- # keyid=0 00:29:07.724 06:55:12 -- host/auth.sh@45 -- # key=DHHC-1:00:NWY4YTk0ZTg2MzZmMjYyYzhkOTYwZTY3NjQ3Mjk1YWO8+KB+: 00:29:07.724 06:55:12 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:07.724 06:55:12 -- host/auth.sh@48 -- # echo ffdhe3072 00:29:07.724 06:55:12 -- host/auth.sh@49 -- # echo DHHC-1:00:NWY4YTk0ZTg2MzZmMjYyYzhkOTYwZTY3NjQ3Mjk1YWO8+KB+: 00:29:07.724 06:55:12 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:29:07.724 06:55:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:07.724 06:55:12 -- host/auth.sh@68 -- # 
digest=sha384 00:29:07.724 06:55:12 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:29:07.724 06:55:12 -- host/auth.sh@68 -- # keyid=0 00:29:07.724 06:55:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:07.724 06:55:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:07.724 06:55:12 -- common/autotest_common.sh@10 -- # set +x 00:29:07.724 06:55:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:07.724 06:55:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:07.724 06:55:12 -- nvmf/common.sh@717 -- # local ip 00:29:07.724 06:55:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:07.724 06:55:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:07.724 06:55:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.724 06:55:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.724 06:55:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:07.724 06:55:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.724 06:55:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:07.724 06:55:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:07.724 06:55:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:07.724 06:55:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:29:07.724 06:55:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:07.724 06:55:12 -- common/autotest_common.sh@10 -- # set +x 00:29:07.983 nvme0n1 00:29:07.983 06:55:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:07.983 06:55:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.983 06:55:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:07.983 06:55:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:07.983 06:55:12 -- common/autotest_common.sh@10 -- # set +x 00:29:07.983 06:55:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:07.983 06:55:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.983 06:55:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.983 06:55:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:07.983 06:55:12 -- common/autotest_common.sh@10 -- # set +x 00:29:07.983 06:55:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:07.983 06:55:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:07.983 06:55:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:29:07.983 06:55:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:07.983 06:55:12 -- host/auth.sh@44 -- # digest=sha384 00:29:07.983 06:55:12 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:07.983 06:55:12 -- host/auth.sh@44 -- # keyid=1 00:29:07.983 06:55:12 -- host/auth.sh@45 -- # key=DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:29:07.983 06:55:12 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:07.983 06:55:12 -- host/auth.sh@48 -- # echo ffdhe3072 00:29:07.983 06:55:12 -- host/auth.sh@49 -- # echo DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:29:07.983 06:55:12 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:29:07.983 06:55:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:07.983 06:55:12 -- host/auth.sh@68 -- # digest=sha384 00:29:07.983 06:55:12 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:29:07.983 06:55:12 -- host/auth.sh@68 
-- # keyid=1 00:29:07.983 06:55:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:07.983 06:55:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:07.983 06:55:12 -- common/autotest_common.sh@10 -- # set +x 00:29:07.983 06:55:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:07.983 06:55:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:07.983 06:55:12 -- nvmf/common.sh@717 -- # local ip 00:29:07.983 06:55:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:07.983 06:55:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:07.983 06:55:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.983 06:55:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.983 06:55:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:07.983 06:55:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.983 06:55:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:07.983 06:55:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:07.983 06:55:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:07.983 06:55:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:29:07.983 06:55:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:07.983 06:55:12 -- common/autotest_common.sh@10 -- # set +x 00:29:07.983 nvme0n1 00:29:07.983 06:55:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:07.983 06:55:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.983 06:55:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:07.983 06:55:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:07.983 06:55:12 -- common/autotest_common.sh@10 -- # set +x 00:29:08.241 06:55:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:08.241 06:55:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.241 06:55:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.242 06:55:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:08.242 06:55:12 -- common/autotest_common.sh@10 -- # set +x 00:29:08.242 06:55:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:08.242 06:55:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:08.242 06:55:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:29:08.242 06:55:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:08.242 06:55:12 -- host/auth.sh@44 -- # digest=sha384 00:29:08.242 06:55:12 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:08.242 06:55:12 -- host/auth.sh@44 -- # keyid=2 00:29:08.242 06:55:12 -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQxNTMwYzNjZGY2ZmQ5NmRjOGYzNmYxYjY4ZjA3NmYINuWZ: 00:29:08.242 06:55:12 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:08.242 06:55:12 -- host/auth.sh@48 -- # echo ffdhe3072 00:29:08.242 06:55:12 -- host/auth.sh@49 -- # echo DHHC-1:01:ZjQxNTMwYzNjZGY2ZmQ5NmRjOGYzNmYxYjY4ZjA3NmYINuWZ: 00:29:08.242 06:55:12 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:29:08.242 06:55:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:08.242 06:55:12 -- host/auth.sh@68 -- # digest=sha384 00:29:08.242 06:55:12 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:29:08.242 06:55:12 -- host/auth.sh@68 -- # keyid=2 00:29:08.242 06:55:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:08.242 06:55:12 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:29:08.242 06:55:12 -- common/autotest_common.sh@10 -- # set +x 00:29:08.242 06:55:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:08.242 06:55:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:08.242 06:55:12 -- nvmf/common.sh@717 -- # local ip 00:29:08.242 06:55:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:08.242 06:55:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:08.242 06:55:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.242 06:55:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.242 06:55:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:08.242 06:55:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.242 06:55:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:08.242 06:55:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:08.242 06:55:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:08.242 06:55:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:08.242 06:55:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:08.242 06:55:12 -- common/autotest_common.sh@10 -- # set +x 00:29:08.242 nvme0n1 00:29:08.242 06:55:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:08.242 06:55:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.242 06:55:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:08.242 06:55:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:08.242 06:55:12 -- common/autotest_common.sh@10 -- # set +x 00:29:08.242 06:55:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:08.242 06:55:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.242 06:55:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.242 06:55:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:08.242 06:55:12 -- common/autotest_common.sh@10 -- # set +x 00:29:08.500 06:55:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:08.500 06:55:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:08.500 06:55:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:29:08.500 06:55:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:08.500 06:55:12 -- host/auth.sh@44 -- # digest=sha384 00:29:08.500 06:55:12 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:08.500 06:55:12 -- host/auth.sh@44 -- # keyid=3 00:29:08.500 06:55:12 -- host/auth.sh@45 -- # key=DHHC-1:02:ZGNjMGZhNjg3MDFiYTRiN2YxZmZmOGM2MjJkM2M1ZThkYzNiYjBiYzdmOTc5NzNh3+M1UQ==: 00:29:08.500 06:55:12 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:08.500 06:55:12 -- host/auth.sh@48 -- # echo ffdhe3072 00:29:08.500 06:55:12 -- host/auth.sh@49 -- # echo DHHC-1:02:ZGNjMGZhNjg3MDFiYTRiN2YxZmZmOGM2MjJkM2M1ZThkYzNiYjBiYzdmOTc5NzNh3+M1UQ==: 00:29:08.500 06:55:12 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:29:08.500 06:55:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:08.500 06:55:12 -- host/auth.sh@68 -- # digest=sha384 00:29:08.500 06:55:12 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:29:08.500 06:55:12 -- host/auth.sh@68 -- # keyid=3 00:29:08.500 06:55:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:08.500 06:55:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:08.500 06:55:12 -- common/autotest_common.sh@10 -- # set +x 
00:29:08.500 06:55:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:08.500 06:55:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:08.500 06:55:12 -- nvmf/common.sh@717 -- # local ip 00:29:08.500 06:55:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:08.500 06:55:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:08.500 06:55:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.500 06:55:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.500 06:55:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:08.500 06:55:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.500 06:55:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:08.500 06:55:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:08.500 06:55:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:08.500 06:55:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:29:08.500 06:55:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:08.500 06:55:12 -- common/autotest_common.sh@10 -- # set +x 00:29:08.500 nvme0n1 00:29:08.501 06:55:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:08.501 06:55:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.501 06:55:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:08.501 06:55:13 -- common/autotest_common.sh@10 -- # set +x 00:29:08.501 06:55:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:08.501 06:55:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:08.501 06:55:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.501 06:55:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.501 06:55:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:08.501 06:55:13 -- common/autotest_common.sh@10 -- # set +x 00:29:08.501 06:55:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:08.501 06:55:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:08.501 06:55:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:29:08.501 06:55:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:08.501 06:55:13 -- host/auth.sh@44 -- # digest=sha384 00:29:08.501 06:55:13 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:08.501 06:55:13 -- host/auth.sh@44 -- # keyid=4 00:29:08.501 06:55:13 -- host/auth.sh@45 -- # key=DHHC-1:03:MjMyZmQyNTdlMzY0YmQzNGJkMzY4ZTY2ZmQ0NzQwMTlkZTY1NDY0M2Q0M2EzOWIyM2NmZjljYTZiMDkxN2FiYz3zi2k=: 00:29:08.501 06:55:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:08.501 06:55:13 -- host/auth.sh@48 -- # echo ffdhe3072 00:29:08.501 06:55:13 -- host/auth.sh@49 -- # echo DHHC-1:03:MjMyZmQyNTdlMzY0YmQzNGJkMzY4ZTY2ZmQ0NzQwMTlkZTY1NDY0M2Q0M2EzOWIyM2NmZjljYTZiMDkxN2FiYz3zi2k=: 00:29:08.501 06:55:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:29:08.501 06:55:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:08.501 06:55:13 -- host/auth.sh@68 -- # digest=sha384 00:29:08.501 06:55:13 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:29:08.501 06:55:13 -- host/auth.sh@68 -- # keyid=4 00:29:08.501 06:55:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:08.501 06:55:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:08.501 06:55:13 -- common/autotest_common.sh@10 -- # set +x 00:29:08.501 06:55:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:29:08.501 06:55:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:08.501 06:55:13 -- nvmf/common.sh@717 -- # local ip 00:29:08.501 06:55:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:08.501 06:55:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:08.501 06:55:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.501 06:55:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.501 06:55:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:08.501 06:55:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.501 06:55:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:08.501 06:55:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:08.501 06:55:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:08.501 06:55:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:08.501 06:55:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:08.501 06:55:13 -- common/autotest_common.sh@10 -- # set +x 00:29:08.759 nvme0n1 00:29:08.759 06:55:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:08.759 06:55:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.759 06:55:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:08.759 06:55:13 -- common/autotest_common.sh@10 -- # set +x 00:29:08.759 06:55:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:08.759 06:55:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:08.759 06:55:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.759 06:55:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.759 06:55:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:08.759 06:55:13 -- common/autotest_common.sh@10 -- # set +x 00:29:08.759 06:55:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:08.759 06:55:13 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:29:08.759 06:55:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:08.759 06:55:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:29:08.759 06:55:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:08.759 06:55:13 -- host/auth.sh@44 -- # digest=sha384 00:29:08.759 06:55:13 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:08.759 06:55:13 -- host/auth.sh@44 -- # keyid=0 00:29:08.759 06:55:13 -- host/auth.sh@45 -- # key=DHHC-1:00:NWY4YTk0ZTg2MzZmMjYyYzhkOTYwZTY3NjQ3Mjk1YWO8+KB+: 00:29:08.759 06:55:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:08.759 06:55:13 -- host/auth.sh@48 -- # echo ffdhe4096 00:29:08.759 06:55:13 -- host/auth.sh@49 -- # echo DHHC-1:00:NWY4YTk0ZTg2MzZmMjYyYzhkOTYwZTY3NjQ3Mjk1YWO8+KB+: 00:29:08.759 06:55:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:29:08.759 06:55:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:08.759 06:55:13 -- host/auth.sh@68 -- # digest=sha384 00:29:08.759 06:55:13 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:29:08.759 06:55:13 -- host/auth.sh@68 -- # keyid=0 00:29:08.759 06:55:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:08.759 06:55:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:08.759 06:55:13 -- common/autotest_common.sh@10 -- # set +x 00:29:08.759 06:55:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:08.759 06:55:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:08.759 06:55:13 -- 
nvmf/common.sh@717 -- # local ip 00:29:08.759 06:55:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:08.759 06:55:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:08.759 06:55:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.759 06:55:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.759 06:55:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:08.759 06:55:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.759 06:55:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:08.759 06:55:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:08.759 06:55:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:08.759 06:55:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:29:08.759 06:55:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:08.759 06:55:13 -- common/autotest_common.sh@10 -- # set +x 00:29:09.016 nvme0n1 00:29:09.016 06:55:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:09.016 06:55:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.016 06:55:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:09.017 06:55:13 -- common/autotest_common.sh@10 -- # set +x 00:29:09.017 06:55:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:09.017 06:55:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:09.017 06:55:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.017 06:55:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.017 06:55:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:09.017 06:55:13 -- common/autotest_common.sh@10 -- # set +x 00:29:09.274 06:55:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:09.274 06:55:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:09.274 06:55:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:29:09.274 06:55:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:09.274 06:55:13 -- host/auth.sh@44 -- # digest=sha384 00:29:09.274 06:55:13 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:09.274 06:55:13 -- host/auth.sh@44 -- # keyid=1 00:29:09.274 06:55:13 -- host/auth.sh@45 -- # key=DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:29:09.274 06:55:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:09.274 06:55:13 -- host/auth.sh@48 -- # echo ffdhe4096 00:29:09.274 06:55:13 -- host/auth.sh@49 -- # echo DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:29:09.274 06:55:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:29:09.274 06:55:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:09.274 06:55:13 -- host/auth.sh@68 -- # digest=sha384 00:29:09.274 06:55:13 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:29:09.274 06:55:13 -- host/auth.sh@68 -- # keyid=1 00:29:09.274 06:55:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:09.274 06:55:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:09.274 06:55:13 -- common/autotest_common.sh@10 -- # set +x 00:29:09.274 06:55:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:09.274 06:55:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:09.274 06:55:13 -- nvmf/common.sh@717 -- # local ip 00:29:09.274 06:55:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:09.274 06:55:13 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:09.274 06:55:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.274 06:55:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.274 06:55:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:09.274 06:55:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.274 06:55:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:09.274 06:55:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:09.274 06:55:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:09.274 06:55:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:29:09.274 06:55:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:09.274 06:55:13 -- common/autotest_common.sh@10 -- # set +x 00:29:09.531 nvme0n1 00:29:09.531 06:55:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:09.531 06:55:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.531 06:55:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:09.531 06:55:13 -- common/autotest_common.sh@10 -- # set +x 00:29:09.531 06:55:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:09.531 06:55:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:09.531 06:55:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.531 06:55:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.531 06:55:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:09.531 06:55:13 -- common/autotest_common.sh@10 -- # set +x 00:29:09.531 06:55:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:09.531 06:55:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:09.531 06:55:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:29:09.531 06:55:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:09.531 06:55:13 -- host/auth.sh@44 -- # digest=sha384 00:29:09.532 06:55:13 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:09.532 06:55:13 -- host/auth.sh@44 -- # keyid=2 00:29:09.532 06:55:13 -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQxNTMwYzNjZGY2ZmQ5NmRjOGYzNmYxYjY4ZjA3NmYINuWZ: 00:29:09.532 06:55:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:09.532 06:55:13 -- host/auth.sh@48 -- # echo ffdhe4096 00:29:09.532 06:55:13 -- host/auth.sh@49 -- # echo DHHC-1:01:ZjQxNTMwYzNjZGY2ZmQ5NmRjOGYzNmYxYjY4ZjA3NmYINuWZ: 00:29:09.532 06:55:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:29:09.532 06:55:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:09.532 06:55:13 -- host/auth.sh@68 -- # digest=sha384 00:29:09.532 06:55:13 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:29:09.532 06:55:13 -- host/auth.sh@68 -- # keyid=2 00:29:09.532 06:55:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:09.532 06:55:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:09.532 06:55:13 -- common/autotest_common.sh@10 -- # set +x 00:29:09.532 06:55:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:09.532 06:55:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:09.532 06:55:13 -- nvmf/common.sh@717 -- # local ip 00:29:09.532 06:55:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:09.532 06:55:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:09.532 06:55:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.532 06:55:13 -- 
nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.532 06:55:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:09.532 06:55:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.532 06:55:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:09.532 06:55:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:09.532 06:55:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:09.532 06:55:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:09.532 06:55:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:09.532 06:55:13 -- common/autotest_common.sh@10 -- # set +x 00:29:09.790 nvme0n1 00:29:09.790 06:55:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:09.790 06:55:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.790 06:55:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:09.790 06:55:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:09.790 06:55:14 -- common/autotest_common.sh@10 -- # set +x 00:29:09.790 06:55:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:09.790 06:55:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.790 06:55:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.790 06:55:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:09.790 06:55:14 -- common/autotest_common.sh@10 -- # set +x 00:29:09.790 06:55:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:09.790 06:55:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:09.790 06:55:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:29:09.790 06:55:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:09.790 06:55:14 -- host/auth.sh@44 -- # digest=sha384 00:29:09.790 06:55:14 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:09.790 06:55:14 -- host/auth.sh@44 -- # keyid=3 00:29:09.790 06:55:14 -- host/auth.sh@45 -- # key=DHHC-1:02:ZGNjMGZhNjg3MDFiYTRiN2YxZmZmOGM2MjJkM2M1ZThkYzNiYjBiYzdmOTc5NzNh3+M1UQ==: 00:29:09.790 06:55:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:09.790 06:55:14 -- host/auth.sh@48 -- # echo ffdhe4096 00:29:09.790 06:55:14 -- host/auth.sh@49 -- # echo DHHC-1:02:ZGNjMGZhNjg3MDFiYTRiN2YxZmZmOGM2MjJkM2M1ZThkYzNiYjBiYzdmOTc5NzNh3+M1UQ==: 00:29:09.790 06:55:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:29:09.790 06:55:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:09.790 06:55:14 -- host/auth.sh@68 -- # digest=sha384 00:29:09.790 06:55:14 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:29:09.790 06:55:14 -- host/auth.sh@68 -- # keyid=3 00:29:09.790 06:55:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:09.790 06:55:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:09.790 06:55:14 -- common/autotest_common.sh@10 -- # set +x 00:29:09.790 06:55:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:09.790 06:55:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:09.791 06:55:14 -- nvmf/common.sh@717 -- # local ip 00:29:09.791 06:55:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:09.791 06:55:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:09.791 06:55:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.791 06:55:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.791 06:55:14 -- nvmf/common.sh@723 -- # [[ -z 
tcp ]] 00:29:09.791 06:55:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.791 06:55:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:09.791 06:55:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:09.791 06:55:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:09.791 06:55:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:29:09.791 06:55:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:09.791 06:55:14 -- common/autotest_common.sh@10 -- # set +x 00:29:10.048 nvme0n1 00:29:10.048 06:55:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:10.048 06:55:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.048 06:55:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:10.048 06:55:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:10.048 06:55:14 -- common/autotest_common.sh@10 -- # set +x 00:29:10.048 06:55:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:10.048 06:55:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.048 06:55:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.048 06:55:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:10.048 06:55:14 -- common/autotest_common.sh@10 -- # set +x 00:29:10.048 06:55:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:10.048 06:55:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:10.048 06:55:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:29:10.048 06:55:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:10.048 06:55:14 -- host/auth.sh@44 -- # digest=sha384 00:29:10.048 06:55:14 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:10.048 06:55:14 -- host/auth.sh@44 -- # keyid=4 00:29:10.048 06:55:14 -- host/auth.sh@45 -- # key=DHHC-1:03:MjMyZmQyNTdlMzY0YmQzNGJkMzY4ZTY2ZmQ0NzQwMTlkZTY1NDY0M2Q0M2EzOWIyM2NmZjljYTZiMDkxN2FiYz3zi2k=: 00:29:10.048 06:55:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:10.048 06:55:14 -- host/auth.sh@48 -- # echo ffdhe4096 00:29:10.048 06:55:14 -- host/auth.sh@49 -- # echo DHHC-1:03:MjMyZmQyNTdlMzY0YmQzNGJkMzY4ZTY2ZmQ0NzQwMTlkZTY1NDY0M2Q0M2EzOWIyM2NmZjljYTZiMDkxN2FiYz3zi2k=: 00:29:10.048 06:55:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:29:10.048 06:55:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:10.048 06:55:14 -- host/auth.sh@68 -- # digest=sha384 00:29:10.048 06:55:14 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:29:10.048 06:55:14 -- host/auth.sh@68 -- # keyid=4 00:29:10.048 06:55:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:10.048 06:55:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:10.048 06:55:14 -- common/autotest_common.sh@10 -- # set +x 00:29:10.048 06:55:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:10.048 06:55:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:10.048 06:55:14 -- nvmf/common.sh@717 -- # local ip 00:29:10.048 06:55:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:10.048 06:55:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:10.048 06:55:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.048 06:55:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.048 06:55:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:10.048 06:55:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP 
]] 00:29:10.048 06:55:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:10.048 06:55:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:10.048 06:55:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:10.048 06:55:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:10.048 06:55:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:10.048 06:55:14 -- common/autotest_common.sh@10 -- # set +x 00:29:10.304 nvme0n1 00:29:10.304 06:55:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:10.304 06:55:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.304 06:55:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:10.304 06:55:14 -- common/autotest_common.sh@10 -- # set +x 00:29:10.304 06:55:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:10.304 06:55:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:10.304 06:55:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.304 06:55:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.304 06:55:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:10.304 06:55:14 -- common/autotest_common.sh@10 -- # set +x 00:29:10.304 06:55:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:10.304 06:55:14 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:29:10.304 06:55:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:10.304 06:55:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:29:10.304 06:55:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:10.304 06:55:14 -- host/auth.sh@44 -- # digest=sha384 00:29:10.304 06:55:14 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:10.304 06:55:14 -- host/auth.sh@44 -- # keyid=0 00:29:10.304 06:55:14 -- host/auth.sh@45 -- # key=DHHC-1:00:NWY4YTk0ZTg2MzZmMjYyYzhkOTYwZTY3NjQ3Mjk1YWO8+KB+: 00:29:10.304 06:55:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:10.304 06:55:14 -- host/auth.sh@48 -- # echo ffdhe6144 00:29:10.304 06:55:14 -- host/auth.sh@49 -- # echo DHHC-1:00:NWY4YTk0ZTg2MzZmMjYyYzhkOTYwZTY3NjQ3Mjk1YWO8+KB+: 00:29:10.304 06:55:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:29:10.304 06:55:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:10.304 06:55:14 -- host/auth.sh@68 -- # digest=sha384 00:29:10.304 06:55:14 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:29:10.304 06:55:14 -- host/auth.sh@68 -- # keyid=0 00:29:10.304 06:55:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:10.304 06:55:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:10.304 06:55:14 -- common/autotest_common.sh@10 -- # set +x 00:29:10.562 06:55:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:10.562 06:55:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:10.562 06:55:14 -- nvmf/common.sh@717 -- # local ip 00:29:10.562 06:55:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:10.562 06:55:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:10.562 06:55:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.562 06:55:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.562 06:55:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:10.562 06:55:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.562 06:55:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:10.562 
06:55:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:10.562 06:55:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:10.562 06:55:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:29:10.562 06:55:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:10.562 06:55:14 -- common/autotest_common.sh@10 -- # set +x 00:29:10.819 nvme0n1 00:29:10.819 06:55:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:10.819 06:55:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.819 06:55:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:10.819 06:55:15 -- common/autotest_common.sh@10 -- # set +x 00:29:10.819 06:55:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:10.819 06:55:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:11.077 06:55:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:11.077 06:55:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:11.077 06:55:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:11.077 06:55:15 -- common/autotest_common.sh@10 -- # set +x 00:29:11.077 06:55:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:11.077 06:55:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:11.077 06:55:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:29:11.077 06:55:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:11.077 06:55:15 -- host/auth.sh@44 -- # digest=sha384 00:29:11.077 06:55:15 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:11.077 06:55:15 -- host/auth.sh@44 -- # keyid=1 00:29:11.077 06:55:15 -- host/auth.sh@45 -- # key=DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:29:11.077 06:55:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:11.077 06:55:15 -- host/auth.sh@48 -- # echo ffdhe6144 00:29:11.077 06:55:15 -- host/auth.sh@49 -- # echo DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:29:11.077 06:55:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:29:11.077 06:55:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:11.077 06:55:15 -- host/auth.sh@68 -- # digest=sha384 00:29:11.077 06:55:15 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:29:11.077 06:55:15 -- host/auth.sh@68 -- # keyid=1 00:29:11.077 06:55:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:11.077 06:55:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:11.077 06:55:15 -- common/autotest_common.sh@10 -- # set +x 00:29:11.077 06:55:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:11.077 06:55:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:11.077 06:55:15 -- nvmf/common.sh@717 -- # local ip 00:29:11.077 06:55:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:11.077 06:55:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:11.077 06:55:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:11.077 06:55:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:11.077 06:55:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:11.077 06:55:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:11.077 06:55:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:11.077 06:55:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:11.077 06:55:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
00:29:11.077 06:55:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:29:11.077 06:55:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:11.077 06:55:15 -- common/autotest_common.sh@10 -- # set +x 00:29:11.643 nvme0n1 00:29:11.643 06:55:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:11.643 06:55:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:11.643 06:55:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:11.643 06:55:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:11.643 06:55:15 -- common/autotest_common.sh@10 -- # set +x 00:29:11.643 06:55:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:11.643 06:55:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:11.643 06:55:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:11.643 06:55:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:11.643 06:55:16 -- common/autotest_common.sh@10 -- # set +x 00:29:11.644 06:55:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:11.644 06:55:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:11.644 06:55:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:29:11.644 06:55:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:11.644 06:55:16 -- host/auth.sh@44 -- # digest=sha384 00:29:11.644 06:55:16 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:11.644 06:55:16 -- host/auth.sh@44 -- # keyid=2 00:29:11.644 06:55:16 -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQxNTMwYzNjZGY2ZmQ5NmRjOGYzNmYxYjY4ZjA3NmYINuWZ: 00:29:11.644 06:55:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:11.644 06:55:16 -- host/auth.sh@48 -- # echo ffdhe6144 00:29:11.644 06:55:16 -- host/auth.sh@49 -- # echo DHHC-1:01:ZjQxNTMwYzNjZGY2ZmQ5NmRjOGYzNmYxYjY4ZjA3NmYINuWZ: 00:29:11.644 06:55:16 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:29:11.644 06:55:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:11.644 06:55:16 -- host/auth.sh@68 -- # digest=sha384 00:29:11.644 06:55:16 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:29:11.644 06:55:16 -- host/auth.sh@68 -- # keyid=2 00:29:11.644 06:55:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:11.644 06:55:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:11.644 06:55:16 -- common/autotest_common.sh@10 -- # set +x 00:29:11.644 06:55:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:11.644 06:55:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:11.644 06:55:16 -- nvmf/common.sh@717 -- # local ip 00:29:11.644 06:55:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:11.644 06:55:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:11.644 06:55:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:11.644 06:55:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:11.644 06:55:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:11.644 06:55:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:11.644 06:55:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:11.644 06:55:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:11.644 06:55:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:11.644 06:55:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:11.644 06:55:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:11.644 06:55:16 -- common/autotest_common.sh@10 -- # set +x 00:29:11.901 nvme0n1 00:29:11.901 06:55:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:11.901 06:55:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:11.901 06:55:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:11.901 06:55:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:11.901 06:55:16 -- common/autotest_common.sh@10 -- # set +x 00:29:11.901 06:55:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.159 06:55:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:12.159 06:55:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:12.159 06:55:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.159 06:55:16 -- common/autotest_common.sh@10 -- # set +x 00:29:12.159 06:55:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.159 06:55:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:12.159 06:55:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:29:12.159 06:55:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:12.159 06:55:16 -- host/auth.sh@44 -- # digest=sha384 00:29:12.159 06:55:16 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:12.159 06:55:16 -- host/auth.sh@44 -- # keyid=3 00:29:12.159 06:55:16 -- host/auth.sh@45 -- # key=DHHC-1:02:ZGNjMGZhNjg3MDFiYTRiN2YxZmZmOGM2MjJkM2M1ZThkYzNiYjBiYzdmOTc5NzNh3+M1UQ==: 00:29:12.159 06:55:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:12.159 06:55:16 -- host/auth.sh@48 -- # echo ffdhe6144 00:29:12.159 06:55:16 -- host/auth.sh@49 -- # echo DHHC-1:02:ZGNjMGZhNjg3MDFiYTRiN2YxZmZmOGM2MjJkM2M1ZThkYzNiYjBiYzdmOTc5NzNh3+M1UQ==: 00:29:12.159 06:55:16 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:29:12.159 06:55:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:12.159 06:55:16 -- host/auth.sh@68 -- # digest=sha384 00:29:12.159 06:55:16 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:29:12.159 06:55:16 -- host/auth.sh@68 -- # keyid=3 00:29:12.159 06:55:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:12.159 06:55:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.159 06:55:16 -- common/autotest_common.sh@10 -- # set +x 00:29:12.159 06:55:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.159 06:55:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:12.160 06:55:16 -- nvmf/common.sh@717 -- # local ip 00:29:12.160 06:55:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:12.160 06:55:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:12.160 06:55:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:12.160 06:55:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:12.160 06:55:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:12.160 06:55:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:12.160 06:55:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:12.160 06:55:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:12.160 06:55:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:12.160 06:55:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:29:12.160 06:55:16 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:29:12.160 06:55:16 -- common/autotest_common.sh@10 -- # set +x 00:29:12.417 nvme0n1 00:29:12.417 06:55:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.675 06:55:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:12.675 06:55:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:12.675 06:55:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.675 06:55:17 -- common/autotest_common.sh@10 -- # set +x 00:29:12.675 06:55:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.675 06:55:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:12.675 06:55:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:12.675 06:55:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.675 06:55:17 -- common/autotest_common.sh@10 -- # set +x 00:29:12.675 06:55:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.675 06:55:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:12.675 06:55:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:29:12.675 06:55:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:12.675 06:55:17 -- host/auth.sh@44 -- # digest=sha384 00:29:12.675 06:55:17 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:12.675 06:55:17 -- host/auth.sh@44 -- # keyid=4 00:29:12.675 06:55:17 -- host/auth.sh@45 -- # key=DHHC-1:03:MjMyZmQyNTdlMzY0YmQzNGJkMzY4ZTY2ZmQ0NzQwMTlkZTY1NDY0M2Q0M2EzOWIyM2NmZjljYTZiMDkxN2FiYz3zi2k=: 00:29:12.676 06:55:17 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:12.676 06:55:17 -- host/auth.sh@48 -- # echo ffdhe6144 00:29:12.676 06:55:17 -- host/auth.sh@49 -- # echo DHHC-1:03:MjMyZmQyNTdlMzY0YmQzNGJkMzY4ZTY2ZmQ0NzQwMTlkZTY1NDY0M2Q0M2EzOWIyM2NmZjljYTZiMDkxN2FiYz3zi2k=: 00:29:12.676 06:55:17 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:29:12.676 06:55:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:12.676 06:55:17 -- host/auth.sh@68 -- # digest=sha384 00:29:12.676 06:55:17 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:29:12.676 06:55:17 -- host/auth.sh@68 -- # keyid=4 00:29:12.676 06:55:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:12.676 06:55:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.676 06:55:17 -- common/autotest_common.sh@10 -- # set +x 00:29:12.676 06:55:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.676 06:55:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:12.676 06:55:17 -- nvmf/common.sh@717 -- # local ip 00:29:12.676 06:55:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:12.676 06:55:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:12.676 06:55:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:12.676 06:55:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:12.676 06:55:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:12.676 06:55:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:12.676 06:55:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:12.676 06:55:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:12.676 06:55:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:12.676 06:55:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:12.676 06:55:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.676 06:55:17 -- common/autotest_common.sh@10 -- # set +x 00:29:13.241 
nvme0n1 00:29:13.241 06:55:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:13.241 06:55:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:13.241 06:55:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:13.241 06:55:17 -- common/autotest_common.sh@10 -- # set +x 00:29:13.241 06:55:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:13.242 06:55:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:13.242 06:55:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:13.242 06:55:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:13.242 06:55:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:13.242 06:55:17 -- common/autotest_common.sh@10 -- # set +x 00:29:13.242 06:55:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:13.242 06:55:17 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:29:13.242 06:55:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:13.242 06:55:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:29:13.242 06:55:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:13.242 06:55:17 -- host/auth.sh@44 -- # digest=sha384 00:29:13.242 06:55:17 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:13.242 06:55:17 -- host/auth.sh@44 -- # keyid=0 00:29:13.242 06:55:17 -- host/auth.sh@45 -- # key=DHHC-1:00:NWY4YTk0ZTg2MzZmMjYyYzhkOTYwZTY3NjQ3Mjk1YWO8+KB+: 00:29:13.242 06:55:17 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:13.242 06:55:17 -- host/auth.sh@48 -- # echo ffdhe8192 00:29:13.242 06:55:17 -- host/auth.sh@49 -- # echo DHHC-1:00:NWY4YTk0ZTg2MzZmMjYyYzhkOTYwZTY3NjQ3Mjk1YWO8+KB+: 00:29:13.242 06:55:17 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:29:13.242 06:55:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:13.242 06:55:17 -- host/auth.sh@68 -- # digest=sha384 00:29:13.242 06:55:17 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:29:13.242 06:55:17 -- host/auth.sh@68 -- # keyid=0 00:29:13.242 06:55:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:13.242 06:55:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:13.242 06:55:17 -- common/autotest_common.sh@10 -- # set +x 00:29:13.242 06:55:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:13.242 06:55:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:13.242 06:55:17 -- nvmf/common.sh@717 -- # local ip 00:29:13.242 06:55:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:13.242 06:55:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:13.242 06:55:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:13.242 06:55:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:13.242 06:55:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:13.242 06:55:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:13.242 06:55:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:13.242 06:55:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:13.242 06:55:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:13.242 06:55:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:29:13.242 06:55:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:13.242 06:55:17 -- common/autotest_common.sh@10 -- # set +x 00:29:14.175 nvme0n1 00:29:14.175 06:55:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:29:14.175 06:55:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:14.175 06:55:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:14.175 06:55:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.175 06:55:18 -- common/autotest_common.sh@10 -- # set +x 00:29:14.175 06:55:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.175 06:55:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:14.175 06:55:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:14.175 06:55:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.175 06:55:18 -- common/autotest_common.sh@10 -- # set +x 00:29:14.175 06:55:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.175 06:55:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:14.175 06:55:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:29:14.175 06:55:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:14.175 06:55:18 -- host/auth.sh@44 -- # digest=sha384 00:29:14.175 06:55:18 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:14.175 06:55:18 -- host/auth.sh@44 -- # keyid=1 00:29:14.175 06:55:18 -- host/auth.sh@45 -- # key=DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:29:14.175 06:55:18 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:14.175 06:55:18 -- host/auth.sh@48 -- # echo ffdhe8192 00:29:14.175 06:55:18 -- host/auth.sh@49 -- # echo DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:29:14.175 06:55:18 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:29:14.175 06:55:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:14.175 06:55:18 -- host/auth.sh@68 -- # digest=sha384 00:29:14.175 06:55:18 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:29:14.175 06:55:18 -- host/auth.sh@68 -- # keyid=1 00:29:14.175 06:55:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:14.175 06:55:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.175 06:55:18 -- common/autotest_common.sh@10 -- # set +x 00:29:14.175 06:55:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.176 06:55:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:14.176 06:55:18 -- nvmf/common.sh@717 -- # local ip 00:29:14.176 06:55:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:14.176 06:55:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:14.176 06:55:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:14.176 06:55:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:14.176 06:55:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:14.176 06:55:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:14.176 06:55:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:14.176 06:55:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:14.176 06:55:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:14.176 06:55:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:29:14.176 06:55:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.176 06:55:18 -- common/autotest_common.sh@10 -- # set +x 00:29:15.109 nvme0n1 00:29:15.109 06:55:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:15.109 06:55:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:15.109 06:55:19 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:29:15.109 06:55:19 -- common/autotest_common.sh@10 -- # set +x 00:29:15.109 06:55:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:15.109 06:55:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:15.109 06:55:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:15.109 06:55:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:15.109 06:55:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:15.109 06:55:19 -- common/autotest_common.sh@10 -- # set +x 00:29:15.109 06:55:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:15.109 06:55:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:15.109 06:55:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:29:15.109 06:55:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:15.109 06:55:19 -- host/auth.sh@44 -- # digest=sha384 00:29:15.109 06:55:19 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:15.109 06:55:19 -- host/auth.sh@44 -- # keyid=2 00:29:15.109 06:55:19 -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQxNTMwYzNjZGY2ZmQ5NmRjOGYzNmYxYjY4ZjA3NmYINuWZ: 00:29:15.109 06:55:19 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:15.109 06:55:19 -- host/auth.sh@48 -- # echo ffdhe8192 00:29:15.109 06:55:19 -- host/auth.sh@49 -- # echo DHHC-1:01:ZjQxNTMwYzNjZGY2ZmQ5NmRjOGYzNmYxYjY4ZjA3NmYINuWZ: 00:29:15.109 06:55:19 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:29:15.109 06:55:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:15.109 06:55:19 -- host/auth.sh@68 -- # digest=sha384 00:29:15.109 06:55:19 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:29:15.109 06:55:19 -- host/auth.sh@68 -- # keyid=2 00:29:15.109 06:55:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:15.109 06:55:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:15.109 06:55:19 -- common/autotest_common.sh@10 -- # set +x 00:29:15.109 06:55:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:15.109 06:55:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:15.109 06:55:19 -- nvmf/common.sh@717 -- # local ip 00:29:15.109 06:55:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:15.109 06:55:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:15.109 06:55:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:15.109 06:55:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:15.109 06:55:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:15.109 06:55:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:15.109 06:55:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:15.109 06:55:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:15.109 06:55:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:15.109 06:55:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:15.109 06:55:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:15.109 06:55:19 -- common/autotest_common.sh@10 -- # set +x 00:29:16.042 nvme0n1 00:29:16.042 06:55:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:16.042 06:55:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:16.042 06:55:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:16.042 06:55:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:16.042 06:55:20 -- common/autotest_common.sh@10 
-- # set +x 00:29:16.042 06:55:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:16.042 06:55:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:16.042 06:55:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:16.042 06:55:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:16.042 06:55:20 -- common/autotest_common.sh@10 -- # set +x 00:29:16.042 06:55:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:16.042 06:55:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:16.042 06:55:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:29:16.042 06:55:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:16.042 06:55:20 -- host/auth.sh@44 -- # digest=sha384 00:29:16.042 06:55:20 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:16.042 06:55:20 -- host/auth.sh@44 -- # keyid=3 00:29:16.042 06:55:20 -- host/auth.sh@45 -- # key=DHHC-1:02:ZGNjMGZhNjg3MDFiYTRiN2YxZmZmOGM2MjJkM2M1ZThkYzNiYjBiYzdmOTc5NzNh3+M1UQ==: 00:29:16.042 06:55:20 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:16.042 06:55:20 -- host/auth.sh@48 -- # echo ffdhe8192 00:29:16.042 06:55:20 -- host/auth.sh@49 -- # echo DHHC-1:02:ZGNjMGZhNjg3MDFiYTRiN2YxZmZmOGM2MjJkM2M1ZThkYzNiYjBiYzdmOTc5NzNh3+M1UQ==: 00:29:16.042 06:55:20 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:29:16.042 06:55:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:16.042 06:55:20 -- host/auth.sh@68 -- # digest=sha384 00:29:16.042 06:55:20 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:29:16.042 06:55:20 -- host/auth.sh@68 -- # keyid=3 00:29:16.042 06:55:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:16.042 06:55:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:16.042 06:55:20 -- common/autotest_common.sh@10 -- # set +x 00:29:16.042 06:55:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:16.042 06:55:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:16.042 06:55:20 -- nvmf/common.sh@717 -- # local ip 00:29:16.042 06:55:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:16.042 06:55:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:16.042 06:55:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:16.042 06:55:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:16.042 06:55:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:16.042 06:55:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:16.042 06:55:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:16.042 06:55:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:16.042 06:55:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:16.042 06:55:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:29:16.042 06:55:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:16.042 06:55:20 -- common/autotest_common.sh@10 -- # set +x 00:29:16.975 nvme0n1 00:29:16.975 06:55:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:16.975 06:55:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:16.975 06:55:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:16.975 06:55:21 -- common/autotest_common.sh@10 -- # set +x 00:29:16.975 06:55:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:16.975 06:55:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:16.975 06:55:21 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:16.975 06:55:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:16.975 06:55:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:16.975 06:55:21 -- common/autotest_common.sh@10 -- # set +x 00:29:16.975 06:55:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:16.975 06:55:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:16.975 06:55:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:29:16.975 06:55:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:16.975 06:55:21 -- host/auth.sh@44 -- # digest=sha384 00:29:16.975 06:55:21 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:16.975 06:55:21 -- host/auth.sh@44 -- # keyid=4 00:29:16.975 06:55:21 -- host/auth.sh@45 -- # key=DHHC-1:03:MjMyZmQyNTdlMzY0YmQzNGJkMzY4ZTY2ZmQ0NzQwMTlkZTY1NDY0M2Q0M2EzOWIyM2NmZjljYTZiMDkxN2FiYz3zi2k=: 00:29:16.975 06:55:21 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:29:16.975 06:55:21 -- host/auth.sh@48 -- # echo ffdhe8192 00:29:16.975 06:55:21 -- host/auth.sh@49 -- # echo DHHC-1:03:MjMyZmQyNTdlMzY0YmQzNGJkMzY4ZTY2ZmQ0NzQwMTlkZTY1NDY0M2Q0M2EzOWIyM2NmZjljYTZiMDkxN2FiYz3zi2k=: 00:29:16.975 06:55:21 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:29:16.975 06:55:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:16.975 06:55:21 -- host/auth.sh@68 -- # digest=sha384 00:29:16.975 06:55:21 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:29:16.975 06:55:21 -- host/auth.sh@68 -- # keyid=4 00:29:16.975 06:55:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:16.975 06:55:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:16.975 06:55:21 -- common/autotest_common.sh@10 -- # set +x 00:29:16.975 06:55:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:16.975 06:55:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:16.975 06:55:21 -- nvmf/common.sh@717 -- # local ip 00:29:16.975 06:55:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:16.975 06:55:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:16.975 06:55:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:16.975 06:55:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:16.975 06:55:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:16.975 06:55:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:16.975 06:55:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:16.975 06:55:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:16.975 06:55:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:16.975 06:55:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:16.975 06:55:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:16.975 06:55:21 -- common/autotest_common.sh@10 -- # set +x 00:29:17.913 nvme0n1 00:29:17.913 06:55:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:17.913 06:55:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.913 06:55:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:17.913 06:55:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:17.914 06:55:22 -- common/autotest_common.sh@10 -- # set +x 00:29:17.914 06:55:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:17.914 06:55:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:17.914 06:55:22 -- 
host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:17.914 06:55:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:17.914 06:55:22 -- common/autotest_common.sh@10 -- # set +x 00:29:17.914 06:55:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:17.914 06:55:22 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:29:17.914 06:55:22 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:29:17.914 06:55:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:17.914 06:55:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:29:17.914 06:55:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:17.914 06:55:22 -- host/auth.sh@44 -- # digest=sha512 00:29:17.914 06:55:22 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:17.914 06:55:22 -- host/auth.sh@44 -- # keyid=0 00:29:17.914 06:55:22 -- host/auth.sh@45 -- # key=DHHC-1:00:NWY4YTk0ZTg2MzZmMjYyYzhkOTYwZTY3NjQ3Mjk1YWO8+KB+: 00:29:17.914 06:55:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:17.914 06:55:22 -- host/auth.sh@48 -- # echo ffdhe2048 00:29:17.914 06:55:22 -- host/auth.sh@49 -- # echo DHHC-1:00:NWY4YTk0ZTg2MzZmMjYyYzhkOTYwZTY3NjQ3Mjk1YWO8+KB+: 00:29:17.914 06:55:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:29:17.914 06:55:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:17.914 06:55:22 -- host/auth.sh@68 -- # digest=sha512 00:29:17.914 06:55:22 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:29:17.914 06:55:22 -- host/auth.sh@68 -- # keyid=0 00:29:17.914 06:55:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:17.914 06:55:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:17.914 06:55:22 -- common/autotest_common.sh@10 -- # set +x 00:29:17.914 06:55:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:17.914 06:55:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:17.914 06:55:22 -- nvmf/common.sh@717 -- # local ip 00:29:17.914 06:55:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:17.914 06:55:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:17.914 06:55:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.914 06:55:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.914 06:55:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:17.914 06:55:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.914 06:55:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:17.914 06:55:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:17.914 06:55:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:17.914 06:55:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:29:17.914 06:55:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:17.914 06:55:22 -- common/autotest_common.sh@10 -- # set +x 00:29:18.172 nvme0n1 00:29:18.172 06:55:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.172 06:55:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.172 06:55:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:18.172 06:55:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.172 06:55:22 -- common/autotest_common.sh@10 -- # set +x 00:29:18.172 06:55:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.172 06:55:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.172 06:55:22 -- 
host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.172 06:55:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.172 06:55:22 -- common/autotest_common.sh@10 -- # set +x 00:29:18.172 06:55:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.172 06:55:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:18.172 06:55:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:29:18.172 06:55:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:18.172 06:55:22 -- host/auth.sh@44 -- # digest=sha512 00:29:18.172 06:55:22 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:18.172 06:55:22 -- host/auth.sh@44 -- # keyid=1 00:29:18.172 06:55:22 -- host/auth.sh@45 -- # key=DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:29:18.172 06:55:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:18.172 06:55:22 -- host/auth.sh@48 -- # echo ffdhe2048 00:29:18.172 06:55:22 -- host/auth.sh@49 -- # echo DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:29:18.172 06:55:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:29:18.172 06:55:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:18.172 06:55:22 -- host/auth.sh@68 -- # digest=sha512 00:29:18.172 06:55:22 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:29:18.172 06:55:22 -- host/auth.sh@68 -- # keyid=1 00:29:18.172 06:55:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:18.172 06:55:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.172 06:55:22 -- common/autotest_common.sh@10 -- # set +x 00:29:18.172 06:55:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.172 06:55:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:18.172 06:55:22 -- nvmf/common.sh@717 -- # local ip 00:29:18.172 06:55:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:18.172 06:55:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:18.172 06:55:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.172 06:55:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.172 06:55:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:18.172 06:55:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.172 06:55:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:18.172 06:55:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:18.172 06:55:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:18.172 06:55:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:29:18.172 06:55:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.172 06:55:22 -- common/autotest_common.sh@10 -- # set +x 00:29:18.172 nvme0n1 00:29:18.172 06:55:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.431 06:55:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.431 06:55:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:18.431 06:55:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.431 06:55:22 -- common/autotest_common.sh@10 -- # set +x 00:29:18.431 06:55:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.431 06:55:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.431 06:55:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.431 06:55:22 -- common/autotest_common.sh@549 -- 
# xtrace_disable 00:29:18.431 06:55:22 -- common/autotest_common.sh@10 -- # set +x 00:29:18.431 06:55:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.431 06:55:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:18.431 06:55:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:29:18.431 06:55:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:18.431 06:55:22 -- host/auth.sh@44 -- # digest=sha512 00:29:18.431 06:55:22 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:18.431 06:55:22 -- host/auth.sh@44 -- # keyid=2 00:29:18.431 06:55:22 -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQxNTMwYzNjZGY2ZmQ5NmRjOGYzNmYxYjY4ZjA3NmYINuWZ: 00:29:18.431 06:55:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:18.431 06:55:22 -- host/auth.sh@48 -- # echo ffdhe2048 00:29:18.431 06:55:22 -- host/auth.sh@49 -- # echo DHHC-1:01:ZjQxNTMwYzNjZGY2ZmQ5NmRjOGYzNmYxYjY4ZjA3NmYINuWZ: 00:29:18.431 06:55:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:29:18.431 06:55:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:18.431 06:55:22 -- host/auth.sh@68 -- # digest=sha512 00:29:18.431 06:55:22 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:29:18.431 06:55:22 -- host/auth.sh@68 -- # keyid=2 00:29:18.431 06:55:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:18.431 06:55:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.431 06:55:22 -- common/autotest_common.sh@10 -- # set +x 00:29:18.431 06:55:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.431 06:55:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:18.431 06:55:22 -- nvmf/common.sh@717 -- # local ip 00:29:18.431 06:55:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:18.431 06:55:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:18.431 06:55:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.431 06:55:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.431 06:55:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:18.431 06:55:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.431 06:55:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:18.431 06:55:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:18.431 06:55:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:18.431 06:55:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:18.431 06:55:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.431 06:55:22 -- common/autotest_common.sh@10 -- # set +x 00:29:18.431 nvme0n1 00:29:18.431 06:55:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.431 06:55:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.431 06:55:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.431 06:55:22 -- common/autotest_common.sh@10 -- # set +x 00:29:18.431 06:55:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:18.431 06:55:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.431 06:55:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.431 06:55:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.431 06:55:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.431 06:55:23 -- common/autotest_common.sh@10 -- # set +x 00:29:18.431 06:55:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.431 
06:55:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:18.431 06:55:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:29:18.431 06:55:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:18.431 06:55:23 -- host/auth.sh@44 -- # digest=sha512 00:29:18.431 06:55:23 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:18.431 06:55:23 -- host/auth.sh@44 -- # keyid=3 00:29:18.431 06:55:23 -- host/auth.sh@45 -- # key=DHHC-1:02:ZGNjMGZhNjg3MDFiYTRiN2YxZmZmOGM2MjJkM2M1ZThkYzNiYjBiYzdmOTc5NzNh3+M1UQ==: 00:29:18.431 06:55:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:18.431 06:55:23 -- host/auth.sh@48 -- # echo ffdhe2048 00:29:18.431 06:55:23 -- host/auth.sh@49 -- # echo DHHC-1:02:ZGNjMGZhNjg3MDFiYTRiN2YxZmZmOGM2MjJkM2M1ZThkYzNiYjBiYzdmOTc5NzNh3+M1UQ==: 00:29:18.431 06:55:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:29:18.431 06:55:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:18.431 06:55:23 -- host/auth.sh@68 -- # digest=sha512 00:29:18.431 06:55:23 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:29:18.431 06:55:23 -- host/auth.sh@68 -- # keyid=3 00:29:18.431 06:55:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:18.431 06:55:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.431 06:55:23 -- common/autotest_common.sh@10 -- # set +x 00:29:18.431 06:55:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.431 06:55:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:18.431 06:55:23 -- nvmf/common.sh@717 -- # local ip 00:29:18.431 06:55:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:18.431 06:55:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:18.431 06:55:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.431 06:55:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.431 06:55:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:18.431 06:55:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.431 06:55:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:18.431 06:55:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:18.431 06:55:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:18.431 06:55:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:29:18.431 06:55:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.431 06:55:23 -- common/autotest_common.sh@10 -- # set +x 00:29:18.690 nvme0n1 00:29:18.690 06:55:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.690 06:55:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.690 06:55:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.690 06:55:23 -- common/autotest_common.sh@10 -- # set +x 00:29:18.690 06:55:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:18.690 06:55:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.690 06:55:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.690 06:55:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.690 06:55:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.690 06:55:23 -- common/autotest_common.sh@10 -- # set +x 00:29:18.690 06:55:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.690 06:55:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:18.690 06:55:23 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha512 ffdhe2048 4 00:29:18.690 06:55:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:18.690 06:55:23 -- host/auth.sh@44 -- # digest=sha512 00:29:18.690 06:55:23 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:18.690 06:55:23 -- host/auth.sh@44 -- # keyid=4 00:29:18.690 06:55:23 -- host/auth.sh@45 -- # key=DHHC-1:03:MjMyZmQyNTdlMzY0YmQzNGJkMzY4ZTY2ZmQ0NzQwMTlkZTY1NDY0M2Q0M2EzOWIyM2NmZjljYTZiMDkxN2FiYz3zi2k=: 00:29:18.690 06:55:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:18.690 06:55:23 -- host/auth.sh@48 -- # echo ffdhe2048 00:29:18.690 06:55:23 -- host/auth.sh@49 -- # echo DHHC-1:03:MjMyZmQyNTdlMzY0YmQzNGJkMzY4ZTY2ZmQ0NzQwMTlkZTY1NDY0M2Q0M2EzOWIyM2NmZjljYTZiMDkxN2FiYz3zi2k=: 00:29:18.690 06:55:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:29:18.690 06:55:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:18.690 06:55:23 -- host/auth.sh@68 -- # digest=sha512 00:29:18.690 06:55:23 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:29:18.690 06:55:23 -- host/auth.sh@68 -- # keyid=4 00:29:18.690 06:55:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:18.690 06:55:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.690 06:55:23 -- common/autotest_common.sh@10 -- # set +x 00:29:18.690 06:55:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.690 06:55:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:18.690 06:55:23 -- nvmf/common.sh@717 -- # local ip 00:29:18.690 06:55:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:18.690 06:55:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:18.690 06:55:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.690 06:55:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.690 06:55:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:18.690 06:55:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.690 06:55:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:18.690 06:55:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:18.690 06:55:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:18.690 06:55:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:18.690 06:55:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.690 06:55:23 -- common/autotest_common.sh@10 -- # set +x 00:29:18.948 nvme0n1 00:29:18.948 06:55:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.948 06:55:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.948 06:55:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.948 06:55:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:18.948 06:55:23 -- common/autotest_common.sh@10 -- # set +x 00:29:18.948 06:55:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.948 06:55:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.948 06:55:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.948 06:55:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.948 06:55:23 -- common/autotest_common.sh@10 -- # set +x 00:29:18.948 06:55:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.948 06:55:23 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:29:18.948 06:55:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:18.948 06:55:23 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha512 ffdhe3072 0 00:29:18.948 06:55:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:18.948 06:55:23 -- host/auth.sh@44 -- # digest=sha512 00:29:18.948 06:55:23 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:18.948 06:55:23 -- host/auth.sh@44 -- # keyid=0 00:29:18.948 06:55:23 -- host/auth.sh@45 -- # key=DHHC-1:00:NWY4YTk0ZTg2MzZmMjYyYzhkOTYwZTY3NjQ3Mjk1YWO8+KB+: 00:29:18.948 06:55:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:18.948 06:55:23 -- host/auth.sh@48 -- # echo ffdhe3072 00:29:18.948 06:55:23 -- host/auth.sh@49 -- # echo DHHC-1:00:NWY4YTk0ZTg2MzZmMjYyYzhkOTYwZTY3NjQ3Mjk1YWO8+KB+: 00:29:18.948 06:55:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:29:18.948 06:55:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:18.948 06:55:23 -- host/auth.sh@68 -- # digest=sha512 00:29:18.948 06:55:23 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:29:18.948 06:55:23 -- host/auth.sh@68 -- # keyid=0 00:29:18.948 06:55:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:18.948 06:55:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.948 06:55:23 -- common/autotest_common.sh@10 -- # set +x 00:29:18.948 06:55:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.948 06:55:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:18.948 06:55:23 -- nvmf/common.sh@717 -- # local ip 00:29:18.948 06:55:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:18.948 06:55:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:18.948 06:55:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.948 06:55:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.948 06:55:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:18.948 06:55:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.948 06:55:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:18.948 06:55:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:18.948 06:55:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:18.948 06:55:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:29:18.948 06:55:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.948 06:55:23 -- common/autotest_common.sh@10 -- # set +x 00:29:19.207 nvme0n1 00:29:19.207 06:55:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.207 06:55:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.207 06:55:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.207 06:55:23 -- common/autotest_common.sh@10 -- # set +x 00:29:19.207 06:55:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:19.207 06:55:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.207 06:55:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.207 06:55:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.207 06:55:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.207 06:55:23 -- common/autotest_common.sh@10 -- # set +x 00:29:19.207 06:55:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.207 06:55:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:19.207 06:55:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:29:19.207 06:55:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:19.207 06:55:23 -- host/auth.sh@44 -- # 
digest=sha512 00:29:19.207 06:55:23 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:19.207 06:55:23 -- host/auth.sh@44 -- # keyid=1 00:29:19.207 06:55:23 -- host/auth.sh@45 -- # key=DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:29:19.207 06:55:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:19.207 06:55:23 -- host/auth.sh@48 -- # echo ffdhe3072 00:29:19.207 06:55:23 -- host/auth.sh@49 -- # echo DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:29:19.207 06:55:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:29:19.207 06:55:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:19.207 06:55:23 -- host/auth.sh@68 -- # digest=sha512 00:29:19.207 06:55:23 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:29:19.207 06:55:23 -- host/auth.sh@68 -- # keyid=1 00:29:19.207 06:55:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:19.207 06:55:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.207 06:55:23 -- common/autotest_common.sh@10 -- # set +x 00:29:19.207 06:55:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.207 06:55:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:19.207 06:55:23 -- nvmf/common.sh@717 -- # local ip 00:29:19.207 06:55:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:19.207 06:55:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:19.207 06:55:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.207 06:55:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.207 06:55:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:19.207 06:55:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.207 06:55:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:19.207 06:55:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:19.207 06:55:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:19.207 06:55:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:29:19.207 06:55:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.207 06:55:23 -- common/autotest_common.sh@10 -- # set +x 00:29:19.207 nvme0n1 00:29:19.207 06:55:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.207 06:55:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.207 06:55:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.207 06:55:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:19.207 06:55:23 -- common/autotest_common.sh@10 -- # set +x 00:29:19.207 06:55:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.465 06:55:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.465 06:55:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.465 06:55:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.465 06:55:23 -- common/autotest_common.sh@10 -- # set +x 00:29:19.465 06:55:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.465 06:55:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:19.465 06:55:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:29:19.465 06:55:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:19.465 06:55:23 -- host/auth.sh@44 -- # digest=sha512 00:29:19.465 06:55:23 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:19.465 06:55:23 -- host/auth.sh@44 
-- # keyid=2 00:29:19.465 06:55:23 -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQxNTMwYzNjZGY2ZmQ5NmRjOGYzNmYxYjY4ZjA3NmYINuWZ: 00:29:19.465 06:55:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:19.465 06:55:23 -- host/auth.sh@48 -- # echo ffdhe3072 00:29:19.465 06:55:23 -- host/auth.sh@49 -- # echo DHHC-1:01:ZjQxNTMwYzNjZGY2ZmQ5NmRjOGYzNmYxYjY4ZjA3NmYINuWZ: 00:29:19.465 06:55:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:29:19.465 06:55:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:19.465 06:55:23 -- host/auth.sh@68 -- # digest=sha512 00:29:19.465 06:55:23 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:29:19.465 06:55:23 -- host/auth.sh@68 -- # keyid=2 00:29:19.465 06:55:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:19.465 06:55:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.465 06:55:23 -- common/autotest_common.sh@10 -- # set +x 00:29:19.465 06:55:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.465 06:55:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:19.465 06:55:23 -- nvmf/common.sh@717 -- # local ip 00:29:19.465 06:55:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:19.465 06:55:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:19.465 06:55:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.465 06:55:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.465 06:55:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:19.465 06:55:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.465 06:55:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:19.465 06:55:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:19.465 06:55:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:19.465 06:55:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:19.465 06:55:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.465 06:55:23 -- common/autotest_common.sh@10 -- # set +x 00:29:19.465 nvme0n1 00:29:19.465 06:55:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.465 06:55:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.465 06:55:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.465 06:55:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:19.465 06:55:24 -- common/autotest_common.sh@10 -- # set +x 00:29:19.465 06:55:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.465 06:55:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.465 06:55:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.465 06:55:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.465 06:55:24 -- common/autotest_common.sh@10 -- # set +x 00:29:19.465 06:55:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.465 06:55:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:19.465 06:55:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:29:19.465 06:55:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:19.465 06:55:24 -- host/auth.sh@44 -- # digest=sha512 00:29:19.465 06:55:24 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:19.465 06:55:24 -- host/auth.sh@44 -- # keyid=3 00:29:19.465 06:55:24 -- host/auth.sh@45 -- # key=DHHC-1:02:ZGNjMGZhNjg3MDFiYTRiN2YxZmZmOGM2MjJkM2M1ZThkYzNiYjBiYzdmOTc5NzNh3+M1UQ==: 00:29:19.465 06:55:24 
-- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:19.465 06:55:24 -- host/auth.sh@48 -- # echo ffdhe3072 00:29:19.465 06:55:24 -- host/auth.sh@49 -- # echo DHHC-1:02:ZGNjMGZhNjg3MDFiYTRiN2YxZmZmOGM2MjJkM2M1ZThkYzNiYjBiYzdmOTc5NzNh3+M1UQ==: 00:29:19.465 06:55:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:29:19.465 06:55:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:19.465 06:55:24 -- host/auth.sh@68 -- # digest=sha512 00:29:19.465 06:55:24 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:29:19.465 06:55:24 -- host/auth.sh@68 -- # keyid=3 00:29:19.465 06:55:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:19.465 06:55:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.465 06:55:24 -- common/autotest_common.sh@10 -- # set +x 00:29:19.723 06:55:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.723 06:55:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:19.723 06:55:24 -- nvmf/common.sh@717 -- # local ip 00:29:19.723 06:55:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:19.723 06:55:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:19.723 06:55:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.723 06:55:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.723 06:55:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:19.723 06:55:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.723 06:55:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:19.723 06:55:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:19.723 06:55:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:19.723 06:55:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:29:19.723 06:55:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.723 06:55:24 -- common/autotest_common.sh@10 -- # set +x 00:29:19.723 nvme0n1 00:29:19.723 06:55:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.723 06:55:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.723 06:55:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:19.723 06:55:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.723 06:55:24 -- common/autotest_common.sh@10 -- # set +x 00:29:19.723 06:55:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.723 06:55:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.723 06:55:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.723 06:55:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.723 06:55:24 -- common/autotest_common.sh@10 -- # set +x 00:29:19.723 06:55:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.723 06:55:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:19.723 06:55:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:29:19.723 06:55:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:19.723 06:55:24 -- host/auth.sh@44 -- # digest=sha512 00:29:19.723 06:55:24 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:19.723 06:55:24 -- host/auth.sh@44 -- # keyid=4 00:29:19.723 06:55:24 -- host/auth.sh@45 -- # key=DHHC-1:03:MjMyZmQyNTdlMzY0YmQzNGJkMzY4ZTY2ZmQ0NzQwMTlkZTY1NDY0M2Q0M2EzOWIyM2NmZjljYTZiMDkxN2FiYz3zi2k=: 00:29:19.723 06:55:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:19.723 06:55:24 -- host/auth.sh@48 -- # echo 
ffdhe3072 00:29:19.723 06:55:24 -- host/auth.sh@49 -- # echo DHHC-1:03:MjMyZmQyNTdlMzY0YmQzNGJkMzY4ZTY2ZmQ0NzQwMTlkZTY1NDY0M2Q0M2EzOWIyM2NmZjljYTZiMDkxN2FiYz3zi2k=: 00:29:19.723 06:55:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:29:19.723 06:55:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:19.723 06:55:24 -- host/auth.sh@68 -- # digest=sha512 00:29:19.723 06:55:24 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:29:19.723 06:55:24 -- host/auth.sh@68 -- # keyid=4 00:29:19.723 06:55:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:19.723 06:55:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.723 06:55:24 -- common/autotest_common.sh@10 -- # set +x 00:29:19.723 06:55:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.723 06:55:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:19.723 06:55:24 -- nvmf/common.sh@717 -- # local ip 00:29:19.723 06:55:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:19.723 06:55:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:19.723 06:55:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.724 06:55:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.724 06:55:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:19.724 06:55:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.724 06:55:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:19.724 06:55:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:19.724 06:55:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:19.724 06:55:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:19.724 06:55:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.724 06:55:24 -- common/autotest_common.sh@10 -- # set +x 00:29:19.982 nvme0n1 00:29:19.982 06:55:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.982 06:55:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.982 06:55:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:19.982 06:55:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.982 06:55:24 -- common/autotest_common.sh@10 -- # set +x 00:29:19.982 06:55:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.982 06:55:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.982 06:55:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.982 06:55:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.982 06:55:24 -- common/autotest_common.sh@10 -- # set +x 00:29:19.982 06:55:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.982 06:55:24 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:29:19.982 06:55:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:19.982 06:55:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:29:19.982 06:55:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:19.982 06:55:24 -- host/auth.sh@44 -- # digest=sha512 00:29:19.982 06:55:24 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:19.982 06:55:24 -- host/auth.sh@44 -- # keyid=0 00:29:19.982 06:55:24 -- host/auth.sh@45 -- # key=DHHC-1:00:NWY4YTk0ZTg2MzZmMjYyYzhkOTYwZTY3NjQ3Mjk1YWO8+KB+: 00:29:19.982 06:55:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:19.982 06:55:24 -- host/auth.sh@48 -- # echo ffdhe4096 00:29:19.982 06:55:24 -- 
host/auth.sh@49 -- # echo DHHC-1:00:NWY4YTk0ZTg2MzZmMjYyYzhkOTYwZTY3NjQ3Mjk1YWO8+KB+: 00:29:19.982 06:55:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:29:19.982 06:55:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:19.982 06:55:24 -- host/auth.sh@68 -- # digest=sha512 00:29:19.982 06:55:24 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:29:19.982 06:55:24 -- host/auth.sh@68 -- # keyid=0 00:29:19.982 06:55:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:19.982 06:55:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.982 06:55:24 -- common/autotest_common.sh@10 -- # set +x 00:29:19.982 06:55:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.982 06:55:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:19.982 06:55:24 -- nvmf/common.sh@717 -- # local ip 00:29:19.982 06:55:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:19.982 06:55:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:19.982 06:55:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.982 06:55:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.982 06:55:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:19.982 06:55:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.982 06:55:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:19.982 06:55:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:19.982 06:55:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:19.982 06:55:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:29:19.982 06:55:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.982 06:55:24 -- common/autotest_common.sh@10 -- # set +x 00:29:20.240 nvme0n1 00:29:20.240 06:55:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.240 06:55:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.240 06:55:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:20.240 06:55:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.240 06:55:24 -- common/autotest_common.sh@10 -- # set +x 00:29:20.240 06:55:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.240 06:55:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.240 06:55:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.240 06:55:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.240 06:55:24 -- common/autotest_common.sh@10 -- # set +x 00:29:20.240 06:55:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.240 06:55:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:20.240 06:55:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:29:20.240 06:55:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:20.240 06:55:24 -- host/auth.sh@44 -- # digest=sha512 00:29:20.240 06:55:24 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:20.240 06:55:24 -- host/auth.sh@44 -- # keyid=1 00:29:20.240 06:55:24 -- host/auth.sh@45 -- # key=DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:29:20.240 06:55:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:20.240 06:55:24 -- host/auth.sh@48 -- # echo ffdhe4096 00:29:20.240 06:55:24 -- host/auth.sh@49 -- # echo DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:29:20.240 06:55:24 -- 
host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:29:20.240 06:55:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:20.240 06:55:24 -- host/auth.sh@68 -- # digest=sha512 00:29:20.240 06:55:24 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:29:20.241 06:55:24 -- host/auth.sh@68 -- # keyid=1 00:29:20.241 06:55:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:20.241 06:55:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.241 06:55:24 -- common/autotest_common.sh@10 -- # set +x 00:29:20.498 06:55:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.498 06:55:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:20.498 06:55:24 -- nvmf/common.sh@717 -- # local ip 00:29:20.498 06:55:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:20.498 06:55:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:20.498 06:55:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.498 06:55:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.498 06:55:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:20.498 06:55:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.498 06:55:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:20.498 06:55:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:20.498 06:55:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:20.498 06:55:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:29:20.498 06:55:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.498 06:55:24 -- common/autotest_common.sh@10 -- # set +x 00:29:20.756 nvme0n1 00:29:20.756 06:55:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.756 06:55:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.756 06:55:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.756 06:55:25 -- common/autotest_common.sh@10 -- # set +x 00:29:20.756 06:55:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:20.756 06:55:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.756 06:55:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.756 06:55:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.756 06:55:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.756 06:55:25 -- common/autotest_common.sh@10 -- # set +x 00:29:20.756 06:55:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.756 06:55:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:20.756 06:55:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:29:20.756 06:55:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:20.756 06:55:25 -- host/auth.sh@44 -- # digest=sha512 00:29:20.756 06:55:25 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:20.756 06:55:25 -- host/auth.sh@44 -- # keyid=2 00:29:20.756 06:55:25 -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQxNTMwYzNjZGY2ZmQ5NmRjOGYzNmYxYjY4ZjA3NmYINuWZ: 00:29:20.756 06:55:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:20.756 06:55:25 -- host/auth.sh@48 -- # echo ffdhe4096 00:29:20.756 06:55:25 -- host/auth.sh@49 -- # echo DHHC-1:01:ZjQxNTMwYzNjZGY2ZmQ5NmRjOGYzNmYxYjY4ZjA3NmYINuWZ: 00:29:20.756 06:55:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:29:20.756 06:55:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:20.756 06:55:25 -- 
host/auth.sh@68 -- # digest=sha512 00:29:20.756 06:55:25 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:29:20.756 06:55:25 -- host/auth.sh@68 -- # keyid=2 00:29:20.756 06:55:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:20.756 06:55:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.756 06:55:25 -- common/autotest_common.sh@10 -- # set +x 00:29:20.756 06:55:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.756 06:55:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:20.757 06:55:25 -- nvmf/common.sh@717 -- # local ip 00:29:20.757 06:55:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:20.757 06:55:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:20.757 06:55:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.757 06:55:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.757 06:55:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:20.757 06:55:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.757 06:55:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:20.757 06:55:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:20.757 06:55:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:20.757 06:55:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:20.757 06:55:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.757 06:55:25 -- common/autotest_common.sh@10 -- # set +x 00:29:21.014 nvme0n1 00:29:21.014 06:55:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.014 06:55:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.014 06:55:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.014 06:55:25 -- common/autotest_common.sh@10 -- # set +x 00:29:21.014 06:55:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:21.014 06:55:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.014 06:55:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.014 06:55:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.014 06:55:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.014 06:55:25 -- common/autotest_common.sh@10 -- # set +x 00:29:21.014 06:55:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.014 06:55:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:21.014 06:55:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:29:21.014 06:55:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:21.014 06:55:25 -- host/auth.sh@44 -- # digest=sha512 00:29:21.014 06:55:25 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:21.014 06:55:25 -- host/auth.sh@44 -- # keyid=3 00:29:21.014 06:55:25 -- host/auth.sh@45 -- # key=DHHC-1:02:ZGNjMGZhNjg3MDFiYTRiN2YxZmZmOGM2MjJkM2M1ZThkYzNiYjBiYzdmOTc5NzNh3+M1UQ==: 00:29:21.014 06:55:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:21.014 06:55:25 -- host/auth.sh@48 -- # echo ffdhe4096 00:29:21.014 06:55:25 -- host/auth.sh@49 -- # echo DHHC-1:02:ZGNjMGZhNjg3MDFiYTRiN2YxZmZmOGM2MjJkM2M1ZThkYzNiYjBiYzdmOTc5NzNh3+M1UQ==: 00:29:21.014 06:55:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:29:21.014 06:55:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:21.014 06:55:25 -- host/auth.sh@68 -- # digest=sha512 00:29:21.014 06:55:25 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:29:21.014 06:55:25 
-- host/auth.sh@68 -- # keyid=3 00:29:21.014 06:55:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:21.014 06:55:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.014 06:55:25 -- common/autotest_common.sh@10 -- # set +x 00:29:21.014 06:55:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.014 06:55:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:21.014 06:55:25 -- nvmf/common.sh@717 -- # local ip 00:29:21.014 06:55:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:21.014 06:55:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:21.014 06:55:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.014 06:55:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.014 06:55:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:21.014 06:55:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.014 06:55:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:21.014 06:55:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:21.014 06:55:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:21.014 06:55:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:29:21.014 06:55:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.014 06:55:25 -- common/autotest_common.sh@10 -- # set +x 00:29:21.272 nvme0n1 00:29:21.272 06:55:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.272 06:55:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.272 06:55:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.272 06:55:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:21.272 06:55:25 -- common/autotest_common.sh@10 -- # set +x 00:29:21.272 06:55:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.272 06:55:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.272 06:55:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.272 06:55:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.272 06:55:25 -- common/autotest_common.sh@10 -- # set +x 00:29:21.272 06:55:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.272 06:55:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:21.272 06:55:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:29:21.272 06:55:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:21.272 06:55:25 -- host/auth.sh@44 -- # digest=sha512 00:29:21.272 06:55:25 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:21.272 06:55:25 -- host/auth.sh@44 -- # keyid=4 00:29:21.272 06:55:25 -- host/auth.sh@45 -- # key=DHHC-1:03:MjMyZmQyNTdlMzY0YmQzNGJkMzY4ZTY2ZmQ0NzQwMTlkZTY1NDY0M2Q0M2EzOWIyM2NmZjljYTZiMDkxN2FiYz3zi2k=: 00:29:21.272 06:55:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:21.272 06:55:25 -- host/auth.sh@48 -- # echo ffdhe4096 00:29:21.272 06:55:25 -- host/auth.sh@49 -- # echo DHHC-1:03:MjMyZmQyNTdlMzY0YmQzNGJkMzY4ZTY2ZmQ0NzQwMTlkZTY1NDY0M2Q0M2EzOWIyM2NmZjljYTZiMDkxN2FiYz3zi2k=: 00:29:21.272 06:55:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:29:21.272 06:55:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:21.272 06:55:25 -- host/auth.sh@68 -- # digest=sha512 00:29:21.272 06:55:25 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:29:21.272 06:55:25 -- host/auth.sh@68 -- # keyid=4 00:29:21.272 06:55:25 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:21.272 06:55:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.272 06:55:25 -- common/autotest_common.sh@10 -- # set +x 00:29:21.529 06:55:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.529 06:55:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:21.529 06:55:25 -- nvmf/common.sh@717 -- # local ip 00:29:21.529 06:55:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:21.529 06:55:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:21.529 06:55:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.529 06:55:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.529 06:55:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:21.529 06:55:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.529 06:55:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:21.529 06:55:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:21.529 06:55:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:21.529 06:55:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:21.529 06:55:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.529 06:55:25 -- common/autotest_common.sh@10 -- # set +x 00:29:21.786 nvme0n1 00:29:21.786 06:55:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.786 06:55:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.786 06:55:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.786 06:55:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:21.786 06:55:26 -- common/autotest_common.sh@10 -- # set +x 00:29:21.786 06:55:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.786 06:55:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.786 06:55:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.787 06:55:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.787 06:55:26 -- common/autotest_common.sh@10 -- # set +x 00:29:21.787 06:55:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.787 06:55:26 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:29:21.787 06:55:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:21.787 06:55:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:29:21.787 06:55:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:21.787 06:55:26 -- host/auth.sh@44 -- # digest=sha512 00:29:21.787 06:55:26 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:21.787 06:55:26 -- host/auth.sh@44 -- # keyid=0 00:29:21.787 06:55:26 -- host/auth.sh@45 -- # key=DHHC-1:00:NWY4YTk0ZTg2MzZmMjYyYzhkOTYwZTY3NjQ3Mjk1YWO8+KB+: 00:29:21.787 06:55:26 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:21.787 06:55:26 -- host/auth.sh@48 -- # echo ffdhe6144 00:29:21.787 06:55:26 -- host/auth.sh@49 -- # echo DHHC-1:00:NWY4YTk0ZTg2MzZmMjYyYzhkOTYwZTY3NjQ3Mjk1YWO8+KB+: 00:29:21.787 06:55:26 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:29:21.787 06:55:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:21.787 06:55:26 -- host/auth.sh@68 -- # digest=sha512 00:29:21.787 06:55:26 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:29:21.787 06:55:26 -- host/auth.sh@68 -- # keyid=0 00:29:21.787 06:55:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
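The block above is one pass of the pattern this part of the trace keeps repeating: for every digest/DH-group/key-index combination the target-side secret is installed, the initiator is restricted to that combination, a controller is attached (which forces the DH-HMAC-CHAP handshake), and the controller is detached again. Below is a minimal sketch of one pass, reconstructed only from the RPC calls visible in the trace; the rpc_cmd wrapper and the SPDK_DIR location are assumptions, and the target-side key installation is reduced to a comment because its configfs details are not shown here.

# One connect_authenticate pass as reconstructed from the trace above (sketch, not the
# verbatim host/auth.sh code). Assumes the kernel nvmet target at 10.0.0.1:4420 already
# holds the matching DHHC-1 secret for the host NQN, and that "key$keyid" has been
# registered with the SPDK initiator beforehand.
rpc_cmd() { "$SPDK_DIR/scripts/rpc.py" "$@"; }   # hypothetical wrapper; SPDK_DIR is a placeholder

digest=sha512 dhgroup=ffdhe4096 keyid=0          # one of the combinations exercised above

# Limit the initiator to the digest/DH group under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attaching the controller triggers the DH-HMAC-CHAP handshake with the selected key.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"

# The controller only shows up if authentication succeeded.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# Detach before the next digest/dhgroup/keyid combination.
rpc_cmd bdev_nvme_detach_controller nvme0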
00:29:21.787 06:55:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.787 06:55:26 -- common/autotest_common.sh@10 -- # set +x 00:29:21.787 06:55:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.787 06:55:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:21.787 06:55:26 -- nvmf/common.sh@717 -- # local ip 00:29:21.787 06:55:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:21.787 06:55:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:21.787 06:55:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.787 06:55:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.787 06:55:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:21.787 06:55:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.787 06:55:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:21.787 06:55:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:21.787 06:55:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:21.787 06:55:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:29:21.787 06:55:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.787 06:55:26 -- common/autotest_common.sh@10 -- # set +x 00:29:22.351 nvme0n1 00:29:22.351 06:55:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.351 06:55:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:22.351 06:55:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.351 06:55:26 -- common/autotest_common.sh@10 -- # set +x 00:29:22.351 06:55:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:22.351 06:55:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.351 06:55:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.351 06:55:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.351 06:55:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.351 06:55:26 -- common/autotest_common.sh@10 -- # set +x 00:29:22.351 06:55:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.351 06:55:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:22.351 06:55:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:29:22.351 06:55:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:22.351 06:55:26 -- host/auth.sh@44 -- # digest=sha512 00:29:22.351 06:55:26 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:22.351 06:55:26 -- host/auth.sh@44 -- # keyid=1 00:29:22.351 06:55:26 -- host/auth.sh@45 -- # key=DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:29:22.351 06:55:26 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:22.351 06:55:26 -- host/auth.sh@48 -- # echo ffdhe6144 00:29:22.351 06:55:26 -- host/auth.sh@49 -- # echo DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:29:22.351 06:55:26 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:29:22.351 06:55:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:22.351 06:55:26 -- host/auth.sh@68 -- # digest=sha512 00:29:22.351 06:55:26 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:29:22.351 06:55:26 -- host/auth.sh@68 -- # keyid=1 00:29:22.351 06:55:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:22.351 06:55:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.351 06:55:26 -- 
common/autotest_common.sh@10 -- # set +x 00:29:22.351 06:55:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.351 06:55:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:22.352 06:55:26 -- nvmf/common.sh@717 -- # local ip 00:29:22.352 06:55:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:22.352 06:55:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:22.352 06:55:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:22.352 06:55:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:22.352 06:55:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:22.352 06:55:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:22.352 06:55:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:22.352 06:55:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:22.352 06:55:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:22.352 06:55:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:29:22.352 06:55:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.352 06:55:26 -- common/autotest_common.sh@10 -- # set +x 00:29:22.916 nvme0n1 00:29:22.916 06:55:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.916 06:55:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:22.916 06:55:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.916 06:55:27 -- common/autotest_common.sh@10 -- # set +x 00:29:22.916 06:55:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:22.916 06:55:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.916 06:55:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.916 06:55:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.916 06:55:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.916 06:55:27 -- common/autotest_common.sh@10 -- # set +x 00:29:22.916 06:55:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.916 06:55:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:22.916 06:55:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:29:22.916 06:55:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:22.916 06:55:27 -- host/auth.sh@44 -- # digest=sha512 00:29:22.916 06:55:27 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:22.916 06:55:27 -- host/auth.sh@44 -- # keyid=2 00:29:22.916 06:55:27 -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQxNTMwYzNjZGY2ZmQ5NmRjOGYzNmYxYjY4ZjA3NmYINuWZ: 00:29:22.916 06:55:27 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:22.916 06:55:27 -- host/auth.sh@48 -- # echo ffdhe6144 00:29:22.916 06:55:27 -- host/auth.sh@49 -- # echo DHHC-1:01:ZjQxNTMwYzNjZGY2ZmQ5NmRjOGYzNmYxYjY4ZjA3NmYINuWZ: 00:29:22.916 06:55:27 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:29:22.916 06:55:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:22.916 06:55:27 -- host/auth.sh@68 -- # digest=sha512 00:29:22.916 06:55:27 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:29:22.916 06:55:27 -- host/auth.sh@68 -- # keyid=2 00:29:22.916 06:55:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:22.916 06:55:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.916 06:55:27 -- common/autotest_common.sh@10 -- # set +x 00:29:22.916 06:55:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.916 06:55:27 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:29:22.916 06:55:27 -- nvmf/common.sh@717 -- # local ip 00:29:22.916 06:55:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:22.916 06:55:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:22.916 06:55:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:22.916 06:55:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:22.916 06:55:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:22.916 06:55:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:22.916 06:55:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:22.916 06:55:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:22.916 06:55:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:22.916 06:55:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:22.916 06:55:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.916 06:55:27 -- common/autotest_common.sh@10 -- # set +x 00:29:23.482 nvme0n1 00:29:23.482 06:55:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:23.482 06:55:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:23.482 06:55:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:23.482 06:55:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:23.482 06:55:27 -- common/autotest_common.sh@10 -- # set +x 00:29:23.482 06:55:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:23.482 06:55:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:23.482 06:55:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:23.482 06:55:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:23.482 06:55:27 -- common/autotest_common.sh@10 -- # set +x 00:29:23.482 06:55:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:23.482 06:55:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:23.482 06:55:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:29:23.482 06:55:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:23.482 06:55:27 -- host/auth.sh@44 -- # digest=sha512 00:29:23.482 06:55:27 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:23.482 06:55:27 -- host/auth.sh@44 -- # keyid=3 00:29:23.482 06:55:27 -- host/auth.sh@45 -- # key=DHHC-1:02:ZGNjMGZhNjg3MDFiYTRiN2YxZmZmOGM2MjJkM2M1ZThkYzNiYjBiYzdmOTc5NzNh3+M1UQ==: 00:29:23.482 06:55:27 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:23.482 06:55:27 -- host/auth.sh@48 -- # echo ffdhe6144 00:29:23.482 06:55:27 -- host/auth.sh@49 -- # echo DHHC-1:02:ZGNjMGZhNjg3MDFiYTRiN2YxZmZmOGM2MjJkM2M1ZThkYzNiYjBiYzdmOTc5NzNh3+M1UQ==: 00:29:23.482 06:55:27 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:29:23.482 06:55:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:23.482 06:55:27 -- host/auth.sh@68 -- # digest=sha512 00:29:23.482 06:55:27 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:29:23.482 06:55:27 -- host/auth.sh@68 -- # keyid=3 00:29:23.482 06:55:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:23.482 06:55:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:23.482 06:55:27 -- common/autotest_common.sh@10 -- # set +x 00:29:23.482 06:55:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:23.482 06:55:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:23.482 06:55:27 -- nvmf/common.sh@717 -- # local ip 00:29:23.482 06:55:27 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:29:23.482 06:55:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:23.482 06:55:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:23.482 06:55:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:23.482 06:55:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:23.482 06:55:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:23.482 06:55:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:23.482 06:55:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:23.482 06:55:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:23.482 06:55:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:29:23.482 06:55:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:23.482 06:55:27 -- common/autotest_common.sh@10 -- # set +x 00:29:24.048 nvme0n1 00:29:24.048 06:55:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:24.048 06:55:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:24.048 06:55:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:24.048 06:55:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:24.048 06:55:28 -- common/autotest_common.sh@10 -- # set +x 00:29:24.048 06:55:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:24.048 06:55:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.048 06:55:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:24.048 06:55:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:24.048 06:55:28 -- common/autotest_common.sh@10 -- # set +x 00:29:24.048 06:55:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:24.048 06:55:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:24.048 06:55:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:29:24.048 06:55:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:24.048 06:55:28 -- host/auth.sh@44 -- # digest=sha512 00:29:24.048 06:55:28 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:24.048 06:55:28 -- host/auth.sh@44 -- # keyid=4 00:29:24.048 06:55:28 -- host/auth.sh@45 -- # key=DHHC-1:03:MjMyZmQyNTdlMzY0YmQzNGJkMzY4ZTY2ZmQ0NzQwMTlkZTY1NDY0M2Q0M2EzOWIyM2NmZjljYTZiMDkxN2FiYz3zi2k=: 00:29:24.048 06:55:28 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:24.048 06:55:28 -- host/auth.sh@48 -- # echo ffdhe6144 00:29:24.048 06:55:28 -- host/auth.sh@49 -- # echo DHHC-1:03:MjMyZmQyNTdlMzY0YmQzNGJkMzY4ZTY2ZmQ0NzQwMTlkZTY1NDY0M2Q0M2EzOWIyM2NmZjljYTZiMDkxN2FiYz3zi2k=: 00:29:24.048 06:55:28 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:29:24.048 06:55:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:24.048 06:55:28 -- host/auth.sh@68 -- # digest=sha512 00:29:24.048 06:55:28 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:29:24.048 06:55:28 -- host/auth.sh@68 -- # keyid=4 00:29:24.048 06:55:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:24.048 06:55:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:24.048 06:55:28 -- common/autotest_common.sh@10 -- # set +x 00:29:24.048 06:55:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:24.048 06:55:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:24.048 06:55:28 -- nvmf/common.sh@717 -- # local ip 00:29:24.048 06:55:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:24.048 06:55:28 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:29:24.048 06:55:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:24.048 06:55:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:24.048 06:55:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:24.048 06:55:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:24.048 06:55:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:24.048 06:55:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:24.048 06:55:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:24.048 06:55:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:24.048 06:55:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:24.048 06:55:28 -- common/autotest_common.sh@10 -- # set +x 00:29:24.614 nvme0n1 00:29:24.614 06:55:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:24.614 06:55:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:24.614 06:55:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:24.614 06:55:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:24.614 06:55:29 -- common/autotest_common.sh@10 -- # set +x 00:29:24.614 06:55:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:24.614 06:55:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.614 06:55:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:24.614 06:55:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:24.614 06:55:29 -- common/autotest_common.sh@10 -- # set +x 00:29:24.614 06:55:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:24.614 06:55:29 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:29:24.614 06:55:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:24.614 06:55:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:29:24.614 06:55:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:24.614 06:55:29 -- host/auth.sh@44 -- # digest=sha512 00:29:24.614 06:55:29 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:24.614 06:55:29 -- host/auth.sh@44 -- # keyid=0 00:29:24.614 06:55:29 -- host/auth.sh@45 -- # key=DHHC-1:00:NWY4YTk0ZTg2MzZmMjYyYzhkOTYwZTY3NjQ3Mjk1YWO8+KB+: 00:29:24.614 06:55:29 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:24.614 06:55:29 -- host/auth.sh@48 -- # echo ffdhe8192 00:29:24.614 06:55:29 -- host/auth.sh@49 -- # echo DHHC-1:00:NWY4YTk0ZTg2MzZmMjYyYzhkOTYwZTY3NjQ3Mjk1YWO8+KB+: 00:29:24.614 06:55:29 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:29:24.614 06:55:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:24.614 06:55:29 -- host/auth.sh@68 -- # digest=sha512 00:29:24.614 06:55:29 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:29:24.614 06:55:29 -- host/auth.sh@68 -- # keyid=0 00:29:24.614 06:55:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:24.614 06:55:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:24.614 06:55:29 -- common/autotest_common.sh@10 -- # set +x 00:29:24.614 06:55:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:24.614 06:55:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:24.614 06:55:29 -- nvmf/common.sh@717 -- # local ip 00:29:24.614 06:55:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:24.614 06:55:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:24.614 06:55:29 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:24.614 06:55:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:24.614 06:55:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:24.614 06:55:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:24.614 06:55:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:24.614 06:55:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:24.614 06:55:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:24.614 06:55:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:29:24.614 06:55:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:24.614 06:55:29 -- common/autotest_common.sh@10 -- # set +x 00:29:25.547 nvme0n1 00:29:25.547 06:55:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:25.547 06:55:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:25.547 06:55:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:25.547 06:55:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:25.547 06:55:30 -- common/autotest_common.sh@10 -- # set +x 00:29:25.547 06:55:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:25.547 06:55:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:25.547 06:55:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.547 06:55:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:25.547 06:55:30 -- common/autotest_common.sh@10 -- # set +x 00:29:25.547 06:55:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:25.547 06:55:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:25.547 06:55:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:29:25.547 06:55:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:25.547 06:55:30 -- host/auth.sh@44 -- # digest=sha512 00:29:25.547 06:55:30 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:25.547 06:55:30 -- host/auth.sh@44 -- # keyid=1 00:29:25.547 06:55:30 -- host/auth.sh@45 -- # key=DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:29:25.547 06:55:30 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:25.547 06:55:30 -- host/auth.sh@48 -- # echo ffdhe8192 00:29:25.547 06:55:30 -- host/auth.sh@49 -- # echo DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:29:25.547 06:55:30 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:29:25.547 06:55:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:25.547 06:55:30 -- host/auth.sh@68 -- # digest=sha512 00:29:25.547 06:55:30 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:29:25.547 06:55:30 -- host/auth.sh@68 -- # keyid=1 00:29:25.547 06:55:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:25.547 06:55:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:25.547 06:55:30 -- common/autotest_common.sh@10 -- # set +x 00:29:25.547 06:55:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:25.547 06:55:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:25.547 06:55:30 -- nvmf/common.sh@717 -- # local ip 00:29:25.547 06:55:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:25.547 06:55:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:25.547 06:55:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:25.547 06:55:30 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:25.547 06:55:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:25.547 06:55:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:25.547 06:55:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:25.547 06:55:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:25.547 06:55:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:25.547 06:55:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:29:25.547 06:55:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:25.547 06:55:30 -- common/autotest_common.sh@10 -- # set +x 00:29:26.479 nvme0n1 00:29:26.479 06:55:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:26.479 06:55:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.479 06:55:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:26.479 06:55:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:26.479 06:55:31 -- common/autotest_common.sh@10 -- # set +x 00:29:26.737 06:55:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:26.737 06:55:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.737 06:55:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.737 06:55:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:26.737 06:55:31 -- common/autotest_common.sh@10 -- # set +x 00:29:26.737 06:55:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:26.737 06:55:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:26.737 06:55:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:29:26.737 06:55:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:26.737 06:55:31 -- host/auth.sh@44 -- # digest=sha512 00:29:26.737 06:55:31 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:26.737 06:55:31 -- host/auth.sh@44 -- # keyid=2 00:29:26.737 06:55:31 -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQxNTMwYzNjZGY2ZmQ5NmRjOGYzNmYxYjY4ZjA3NmYINuWZ: 00:29:26.737 06:55:31 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:26.737 06:55:31 -- host/auth.sh@48 -- # echo ffdhe8192 00:29:26.737 06:55:31 -- host/auth.sh@49 -- # echo DHHC-1:01:ZjQxNTMwYzNjZGY2ZmQ5NmRjOGYzNmYxYjY4ZjA3NmYINuWZ: 00:29:26.737 06:55:31 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:29:26.737 06:55:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:26.737 06:55:31 -- host/auth.sh@68 -- # digest=sha512 00:29:26.737 06:55:31 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:29:26.737 06:55:31 -- host/auth.sh@68 -- # keyid=2 00:29:26.737 06:55:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:26.737 06:55:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:26.737 06:55:31 -- common/autotest_common.sh@10 -- # set +x 00:29:26.737 06:55:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:26.737 06:55:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:26.737 06:55:31 -- nvmf/common.sh@717 -- # local ip 00:29:26.737 06:55:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:26.737 06:55:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:26.737 06:55:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:26.737 06:55:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:26.737 06:55:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:26.737 06:55:31 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:29:26.737 06:55:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:26.737 06:55:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:26.737 06:55:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:26.737 06:55:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:26.737 06:55:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:26.737 06:55:31 -- common/autotest_common.sh@10 -- # set +x 00:29:27.670 nvme0n1 00:29:27.670 06:55:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:27.670 06:55:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.670 06:55:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:27.670 06:55:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:27.670 06:55:32 -- common/autotest_common.sh@10 -- # set +x 00:29:27.670 06:55:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:27.670 06:55:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.670 06:55:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.670 06:55:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:27.670 06:55:32 -- common/autotest_common.sh@10 -- # set +x 00:29:27.670 06:55:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:27.670 06:55:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:27.670 06:55:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:29:27.670 06:55:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:27.670 06:55:32 -- host/auth.sh@44 -- # digest=sha512 00:29:27.670 06:55:32 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:27.670 06:55:32 -- host/auth.sh@44 -- # keyid=3 00:29:27.670 06:55:32 -- host/auth.sh@45 -- # key=DHHC-1:02:ZGNjMGZhNjg3MDFiYTRiN2YxZmZmOGM2MjJkM2M1ZThkYzNiYjBiYzdmOTc5NzNh3+M1UQ==: 00:29:27.670 06:55:32 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:27.670 06:55:32 -- host/auth.sh@48 -- # echo ffdhe8192 00:29:27.670 06:55:32 -- host/auth.sh@49 -- # echo DHHC-1:02:ZGNjMGZhNjg3MDFiYTRiN2YxZmZmOGM2MjJkM2M1ZThkYzNiYjBiYzdmOTc5NzNh3+M1UQ==: 00:29:27.670 06:55:32 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:29:27.670 06:55:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:27.670 06:55:32 -- host/auth.sh@68 -- # digest=sha512 00:29:27.670 06:55:32 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:29:27.670 06:55:32 -- host/auth.sh@68 -- # keyid=3 00:29:27.670 06:55:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:27.670 06:55:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:27.670 06:55:32 -- common/autotest_common.sh@10 -- # set +x 00:29:27.670 06:55:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:27.670 06:55:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:27.670 06:55:32 -- nvmf/common.sh@717 -- # local ip 00:29:27.670 06:55:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:27.670 06:55:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:27.670 06:55:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.670 06:55:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.670 06:55:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:27.670 06:55:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.670 06:55:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:27.670 06:55:32 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:27.670 06:55:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:27.670 06:55:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:29:27.670 06:55:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:27.670 06:55:32 -- common/autotest_common.sh@10 -- # set +x 00:29:28.603 nvme0n1 00:29:28.603 06:55:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:28.603 06:55:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:28.603 06:55:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:28.603 06:55:33 -- common/autotest_common.sh@10 -- # set +x 00:29:28.603 06:55:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:28.603 06:55:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:28.603 06:55:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.603 06:55:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:28.603 06:55:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:28.603 06:55:33 -- common/autotest_common.sh@10 -- # set +x 00:29:28.603 06:55:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:28.603 06:55:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:29:28.603 06:55:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:29:28.603 06:55:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:28.603 06:55:33 -- host/auth.sh@44 -- # digest=sha512 00:29:28.603 06:55:33 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:28.603 06:55:33 -- host/auth.sh@44 -- # keyid=4 00:29:28.603 06:55:33 -- host/auth.sh@45 -- # key=DHHC-1:03:MjMyZmQyNTdlMzY0YmQzNGJkMzY4ZTY2ZmQ0NzQwMTlkZTY1NDY0M2Q0M2EzOWIyM2NmZjljYTZiMDkxN2FiYz3zi2k=: 00:29:28.603 06:55:33 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:29:28.603 06:55:33 -- host/auth.sh@48 -- # echo ffdhe8192 00:29:28.603 06:55:33 -- host/auth.sh@49 -- # echo DHHC-1:03:MjMyZmQyNTdlMzY0YmQzNGJkMzY4ZTY2ZmQ0NzQwMTlkZTY1NDY0M2Q0M2EzOWIyM2NmZjljYTZiMDkxN2FiYz3zi2k=: 00:29:28.603 06:55:33 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:29:28.603 06:55:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:29:28.603 06:55:33 -- host/auth.sh@68 -- # digest=sha512 00:29:28.603 06:55:33 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:29:28.603 06:55:33 -- host/auth.sh@68 -- # keyid=4 00:29:28.603 06:55:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:28.603 06:55:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:28.603 06:55:33 -- common/autotest_common.sh@10 -- # set +x 00:29:28.603 06:55:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:28.603 06:55:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:29:28.603 06:55:33 -- nvmf/common.sh@717 -- # local ip 00:29:28.603 06:55:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:28.603 06:55:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:28.603 06:55:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.603 06:55:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:28.603 06:55:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:28.603 06:55:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:28.603 06:55:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:28.603 06:55:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:28.603 06:55:33 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:28.603 06:55:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:28.603 06:55:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:28.603 06:55:33 -- common/autotest_common.sh@10 -- # set +x 00:29:29.536 nvme0n1 00:29:29.536 06:55:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:29.536 06:55:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:29:29.536 06:55:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:29.536 06:55:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:29:29.536 06:55:34 -- common/autotest_common.sh@10 -- # set +x 00:29:29.536 06:55:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:29.536 06:55:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:29.536 06:55:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:29.536 06:55:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:29.536 06:55:34 -- common/autotest_common.sh@10 -- # set +x 00:29:29.536 06:55:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:29.536 06:55:34 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:29.536 06:55:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:29:29.536 06:55:34 -- host/auth.sh@44 -- # digest=sha256 00:29:29.536 06:55:34 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:29.536 06:55:34 -- host/auth.sh@44 -- # keyid=1 00:29:29.536 06:55:34 -- host/auth.sh@45 -- # key=DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:29:29.536 06:55:34 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:29:29.536 06:55:34 -- host/auth.sh@48 -- # echo ffdhe2048 00:29:29.536 06:55:34 -- host/auth.sh@49 -- # echo DHHC-1:00:YzcwODI3NjAxMDFiNDcwMWFkZTEwYTNiZWJjYzY4YzEyOTkwZmVlNWM1NDA3NTA58kv5Ow==: 00:29:29.536 06:55:34 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:29.536 06:55:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:29.536 06:55:34 -- common/autotest_common.sh@10 -- # set +x 00:29:29.536 06:55:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:29.536 06:55:34 -- host/auth.sh@119 -- # get_main_ns_ip 00:29:29.536 06:55:34 -- nvmf/common.sh@717 -- # local ip 00:29:29.536 06:55:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:29.536 06:55:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:29.536 06:55:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:29.536 06:55:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:29.536 06:55:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:29.536 06:55:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:29.536 06:55:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:29.536 06:55:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:29.536 06:55:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:29.536 06:55:34 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:29.536 06:55:34 -- common/autotest_common.sh@638 -- # local es=0 00:29:29.536 06:55:34 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:29.536 
06:55:34 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:29.536 06:55:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:29.536 06:55:34 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:29.536 06:55:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:29.536 06:55:34 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:29.536 06:55:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:29.536 06:55:34 -- common/autotest_common.sh@10 -- # set +x 00:29:29.536 request: 00:29:29.536 { 00:29:29.536 "name": "nvme0", 00:29:29.536 "trtype": "tcp", 00:29:29.536 "traddr": "10.0.0.1", 00:29:29.536 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:29.536 "adrfam": "ipv4", 00:29:29.536 "trsvcid": "4420", 00:29:29.536 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:29.536 "method": "bdev_nvme_attach_controller", 00:29:29.536 "req_id": 1 00:29:29.536 } 00:29:29.536 Got JSON-RPC error response 00:29:29.536 response: 00:29:29.536 { 00:29:29.536 "code": -32602, 00:29:29.536 "message": "Invalid parameters" 00:29:29.536 } 00:29:29.536 06:55:34 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:29.536 06:55:34 -- common/autotest_common.sh@641 -- # es=1 00:29:29.536 06:55:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:29.536 06:55:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:29.536 06:55:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:29.536 06:55:34 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:29:29.536 06:55:34 -- host/auth.sh@121 -- # jq length 00:29:29.536 06:55:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:29.536 06:55:34 -- common/autotest_common.sh@10 -- # set +x 00:29:29.809 06:55:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:29.810 06:55:34 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:29:29.810 06:55:34 -- host/auth.sh@124 -- # get_main_ns_ip 00:29:29.810 06:55:34 -- nvmf/common.sh@717 -- # local ip 00:29:29.810 06:55:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:29.810 06:55:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:29.810 06:55:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:29.810 06:55:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:29.810 06:55:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:29.810 06:55:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:29.810 06:55:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:29.810 06:55:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:29.810 06:55:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:29.810 06:55:34 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:29.810 06:55:34 -- common/autotest_common.sh@638 -- # local es=0 00:29:29.810 06:55:34 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:29.810 06:55:34 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:29.810 06:55:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:29.810 06:55:34 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:29.810 06:55:34 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:29.810 06:55:34 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:29.810 06:55:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:29.810 06:55:34 -- common/autotest_common.sh@10 -- # set +x 00:29:29.810 request: 00:29:29.810 { 00:29:29.810 "name": "nvme0", 00:29:29.810 "trtype": "tcp", 00:29:29.810 "traddr": "10.0.0.1", 00:29:29.810 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:29.810 "adrfam": "ipv4", 00:29:29.810 "trsvcid": "4420", 00:29:29.810 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:29.810 "dhchap_key": "key2", 00:29:29.810 "method": "bdev_nvme_attach_controller", 00:29:29.810 "req_id": 1 00:29:29.810 } 00:29:29.810 Got JSON-RPC error response 00:29:29.810 response: 00:29:29.810 { 00:29:29.810 "code": -32602, 00:29:29.810 "message": "Invalid parameters" 00:29:29.810 } 00:29:29.810 06:55:34 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:29.810 06:55:34 -- common/autotest_common.sh@641 -- # es=1 00:29:29.810 06:55:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:29.810 06:55:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:29.810 06:55:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:29.810 06:55:34 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:29:29.810 06:55:34 -- host/auth.sh@127 -- # jq length 00:29:29.810 06:55:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:29.810 06:55:34 -- common/autotest_common.sh@10 -- # set +x 00:29:29.810 06:55:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:29.810 06:55:34 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:29:29.810 06:55:34 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:29:29.810 06:55:34 -- host/auth.sh@130 -- # cleanup 00:29:29.810 06:55:34 -- host/auth.sh@24 -- # nvmftestfini 00:29:29.810 06:55:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:29.810 06:55:34 -- nvmf/common.sh@117 -- # sync 00:29:29.810 06:55:34 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:29.810 06:55:34 -- nvmf/common.sh@120 -- # set +e 00:29:29.810 06:55:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:29.810 06:55:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:29.810 rmmod nvme_tcp 00:29:29.810 rmmod nvme_fabrics 00:29:29.810 06:55:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:29.810 06:55:34 -- nvmf/common.sh@124 -- # set -e 00:29:29.810 06:55:34 -- nvmf/common.sh@125 -- # return 0 00:29:29.810 06:55:34 -- nvmf/common.sh@478 -- # '[' -n 104424 ']' 00:29:29.810 06:55:34 -- nvmf/common.sh@479 -- # killprocess 104424 00:29:29.810 06:55:34 -- common/autotest_common.sh@936 -- # '[' -z 104424 ']' 00:29:29.810 06:55:34 -- common/autotest_common.sh@940 -- # kill -0 104424 00:29:29.810 06:55:34 -- common/autotest_common.sh@941 -- # uname 00:29:29.810 06:55:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:29.810 06:55:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 104424 00:29:29.810 06:55:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:29.810 06:55:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:29.810 06:55:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 104424' 00:29:29.810 killing process with pid 104424 00:29:29.810 06:55:34 -- common/autotest_common.sh@955 -- # kill 104424 00:29:29.810 06:55:34 -- 
common/autotest_common.sh@960 -- # wait 104424 00:29:30.077 06:55:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:30.077 06:55:34 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:30.077 06:55:34 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:30.077 06:55:34 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:30.077 06:55:34 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:30.077 06:55:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.077 06:55:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:30.077 06:55:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:31.982 06:55:36 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:31.982 06:55:36 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:31.982 06:55:36 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:31.983 06:55:36 -- host/auth.sh@27 -- # clean_kernel_target 00:29:31.983 06:55:36 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:29:31.983 06:55:36 -- nvmf/common.sh@675 -- # echo 0 00:29:31.983 06:55:36 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:31.983 06:55:36 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:31.983 06:55:36 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:31.983 06:55:36 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:31.983 06:55:36 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:29:31.983 06:55:36 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:29:32.240 06:55:36 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:33.174 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:33.174 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:29:33.174 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:33.174 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:33.174 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:33.174 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:29:33.174 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:29:33.174 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:33.174 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:33.174 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:29:33.174 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:33.174 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:33.174 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:33.174 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:29:33.174 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:29:33.174 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:34.552 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:29:34.553 06:55:38 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.X1U /tmp/spdk.key-null.paX /tmp/spdk.key-sha256.T5h /tmp/spdk.key-sha384.r1Y /tmp/spdk.key-sha512.5rZ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:29:34.553 06:55:38 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:35.486 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:29:35.486 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:29:35.486 0000:00:04.6 (8086 0e26): Already using the 
vfio-pci driver 00:29:35.486 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:29:35.486 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:29:35.486 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:29:35.486 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:29:35.486 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:29:35.486 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:29:35.486 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:29:35.486 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:29:35.486 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:29:35.486 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:29:35.486 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:29:35.486 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:29:35.486 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:29:35.486 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:29:35.743 00:29:35.744 real 0m46.052s 00:29:35.744 user 0m43.858s 00:29:35.744 sys 0m5.511s 00:29:35.744 06:55:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:35.744 06:55:40 -- common/autotest_common.sh@10 -- # set +x 00:29:35.744 ************************************ 00:29:35.744 END TEST nvmf_auth 00:29:35.744 ************************************ 00:29:35.744 06:55:40 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:29:35.744 06:55:40 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:35.744 06:55:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:35.744 06:55:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:35.744 06:55:40 -- common/autotest_common.sh@10 -- # set +x 00:29:35.744 ************************************ 00:29:35.744 START TEST nvmf_digest 00:29:35.744 ************************************ 00:29:35.744 06:55:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:35.744 * Looking for test storage... 
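The kernel-target cleanup traced just before the nvmf_digest test started (the rm of the allowed_hosts link, clean_kernel_target, and the final modprobe -r) amounts to tearing the nvmet configfs tree down in reverse order of its creation. The sequence below restates those steps from the trace; the only assumption is the redirect target of the bare "echo 0", which is not visible here and is taken to disable the namespace before removal.

# Kernel nvmet teardown as traced above (sketch). The "echo 0" redirect target is not
# shown in the trace; writing it to namespaces/1/enable is an assumption.
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

rm     "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"            # revoke host access
rmdir   /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0    # drop the host entry
echo 0 > "$subsys/namespaces/1/enable"                              # assumed target of "echo 0"
rm -f   /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
rmdir  "$subsys/namespaces/1"
rmdir   /sys/kernel/config/nvmet/ports/1
rmdir  "$subsys"
modprobe -r nvmet_tcp nvmet                                         # unload once configfs is empty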
00:29:35.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:35.744 06:55:40 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:35.744 06:55:40 -- nvmf/common.sh@7 -- # uname -s 00:29:35.744 06:55:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:35.744 06:55:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:35.744 06:55:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:35.744 06:55:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:35.744 06:55:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:35.744 06:55:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:35.744 06:55:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:35.744 06:55:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:35.744 06:55:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:35.744 06:55:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:35.744 06:55:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:35.744 06:55:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:35.744 06:55:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:35.744 06:55:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:35.744 06:55:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:35.744 06:55:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:35.744 06:55:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:35.744 06:55:40 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:35.744 06:55:40 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:35.744 06:55:40 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:35.744 06:55:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.744 06:55:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.744 06:55:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.744 06:55:40 -- paths/export.sh@5 -- # export PATH 00:29:35.744 06:55:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.744 06:55:40 -- nvmf/common.sh@47 -- # : 0 00:29:35.744 06:55:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:35.744 06:55:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:35.744 06:55:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:35.744 06:55:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:35.744 06:55:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:35.744 06:55:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:35.744 06:55:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:35.744 06:55:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:35.744 06:55:40 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:35.744 06:55:40 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:35.744 06:55:40 -- host/digest.sh@16 -- # runtime=2 00:29:35.744 06:55:40 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:29:35.744 06:55:40 -- host/digest.sh@138 -- # nvmftestinit 00:29:35.744 06:55:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:35.744 06:55:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:35.744 06:55:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:35.744 06:55:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:35.744 06:55:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:35.744 06:55:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:35.744 06:55:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:35.744 06:55:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:35.744 06:55:40 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:29:35.744 06:55:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:29:35.744 06:55:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:35.744 06:55:40 -- common/autotest_common.sh@10 -- # set +x 00:29:38.275 06:55:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:38.275 06:55:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:38.275 06:55:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:38.275 06:55:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:38.275 06:55:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:38.275 06:55:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:38.275 06:55:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:38.275 06:55:42 -- 
nvmf/common.sh@295 -- # net_devs=() 00:29:38.275 06:55:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:38.275 06:55:42 -- nvmf/common.sh@296 -- # e810=() 00:29:38.275 06:55:42 -- nvmf/common.sh@296 -- # local -ga e810 00:29:38.275 06:55:42 -- nvmf/common.sh@297 -- # x722=() 00:29:38.275 06:55:42 -- nvmf/common.sh@297 -- # local -ga x722 00:29:38.275 06:55:42 -- nvmf/common.sh@298 -- # mlx=() 00:29:38.275 06:55:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:38.275 06:55:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:38.275 06:55:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:38.275 06:55:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:38.275 06:55:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:38.275 06:55:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:38.275 06:55:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:38.275 06:55:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:38.275 06:55:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:38.275 06:55:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:38.275 06:55:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:38.275 06:55:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:38.275 06:55:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:38.275 06:55:42 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:38.275 06:55:42 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:38.275 06:55:42 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:38.275 06:55:42 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:38.275 06:55:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:38.275 06:55:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:38.275 06:55:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:38.275 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:38.275 06:55:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:38.275 06:55:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:38.275 06:55:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:38.275 06:55:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:38.275 06:55:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:38.275 06:55:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:38.275 06:55:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:38.275 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:38.275 06:55:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:38.275 06:55:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:38.275 06:55:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:38.275 06:55:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:38.275 06:55:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:38.275 06:55:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:38.275 06:55:42 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:38.275 06:55:42 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:38.275 06:55:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:38.275 06:55:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:38.275 06:55:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:38.275 06:55:42 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:38.275 06:55:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:38.275 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:38.275 06:55:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:38.275 06:55:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:38.275 06:55:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:38.275 06:55:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:38.275 06:55:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:38.275 06:55:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:38.275 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:38.275 06:55:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:38.275 06:55:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:29:38.275 06:55:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:29:38.275 06:55:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:29:38.275 06:55:42 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:29:38.275 06:55:42 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:29:38.275 06:55:42 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:38.275 06:55:42 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:38.275 06:55:42 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:38.275 06:55:42 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:38.275 06:55:42 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:38.275 06:55:42 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:38.275 06:55:42 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:38.275 06:55:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:38.275 06:55:42 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:38.275 06:55:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:38.275 06:55:42 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:38.275 06:55:42 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:38.275 06:55:42 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:38.275 06:55:42 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:38.275 06:55:42 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:38.275 06:55:42 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:38.275 06:55:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:38.275 06:55:42 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:38.275 06:55:42 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:38.275 06:55:42 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:38.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:38.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:29:38.275 00:29:38.275 --- 10.0.0.2 ping statistics --- 00:29:38.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:38.275 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:29:38.275 06:55:42 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:38.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:38.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:29:38.275 00:29:38.275 --- 10.0.0.1 ping statistics --- 00:29:38.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:38.275 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:29:38.275 06:55:42 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:38.275 06:55:42 -- nvmf/common.sh@411 -- # return 0 00:29:38.275 06:55:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:29:38.275 06:55:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:38.275 06:55:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:38.275 06:55:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:38.275 06:55:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:38.275 06:55:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:38.275 06:55:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:38.275 06:55:42 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:38.275 06:55:42 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:29:38.275 06:55:42 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:29:38.275 06:55:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:38.275 06:55:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:38.275 06:55:42 -- common/autotest_common.sh@10 -- # set +x 00:29:38.275 ************************************ 00:29:38.275 START TEST nvmf_digest_clean 00:29:38.275 ************************************ 00:29:38.275 06:55:42 -- common/autotest_common.sh@1111 -- # run_digest 00:29:38.275 06:55:42 -- host/digest.sh@120 -- # local dsa_initiator 00:29:38.275 06:55:42 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:38.275 06:55:42 -- host/digest.sh@121 -- # dsa_initiator=false 00:29:38.275 06:55:42 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:38.275 06:55:42 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:38.275 06:55:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:38.275 06:55:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:38.275 06:55:42 -- common/autotest_common.sh@10 -- # set +x 00:29:38.275 06:55:42 -- nvmf/common.sh@470 -- # nvmfpid=113512 00:29:38.275 06:55:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:38.275 06:55:42 -- nvmf/common.sh@471 -- # waitforlisten 113512 00:29:38.275 06:55:42 -- common/autotest_common.sh@817 -- # '[' -z 113512 ']' 00:29:38.275 06:55:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:38.275 06:55:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:38.275 06:55:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:38.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:38.275 06:55:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:38.275 06:55:42 -- common/autotest_common.sh@10 -- # set +x 00:29:38.275 [2024-04-17 06:55:42.595034] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:29:38.275 [2024-04-17 06:55:42.595107] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:38.275 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.275 [2024-04-17 06:55:42.660342] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.275 [2024-04-17 06:55:42.746873] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:38.276 [2024-04-17 06:55:42.746939] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:38.276 [2024-04-17 06:55:42.746964] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:38.276 [2024-04-17 06:55:42.746978] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:38.276 [2024-04-17 06:55:42.746989] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:38.276 [2024-04-17 06:55:42.747018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.276 06:55:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:38.276 06:55:42 -- common/autotest_common.sh@850 -- # return 0 00:29:38.276 06:55:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:38.276 06:55:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:38.276 06:55:42 -- common/autotest_common.sh@10 -- # set +x 00:29:38.276 06:55:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:38.276 06:55:42 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:38.276 06:55:42 -- host/digest.sh@126 -- # common_target_config 00:29:38.276 06:55:42 -- host/digest.sh@43 -- # rpc_cmd 00:29:38.276 06:55:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:38.276 06:55:42 -- common/autotest_common.sh@10 -- # set +x 00:29:38.535 null0 00:29:38.535 [2024-04-17 06:55:42.927655] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:38.535 [2024-04-17 06:55:42.951821] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:38.535 06:55:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:38.535 06:55:42 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:29:38.535 06:55:42 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:38.535 06:55:42 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:38.535 06:55:42 -- host/digest.sh@80 -- # rw=randread 00:29:38.535 06:55:42 -- host/digest.sh@80 -- # bs=4096 00:29:38.535 06:55:42 -- host/digest.sh@80 -- # qd=128 00:29:38.535 06:55:42 -- host/digest.sh@80 -- # scan_dsa=false 00:29:38.535 06:55:42 -- host/digest.sh@83 -- # bperfpid=113536 00:29:38.535 06:55:42 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:38.535 06:55:42 -- host/digest.sh@84 -- # waitforlisten 113536 /var/tmp/bperf.sock 00:29:38.535 06:55:42 -- common/autotest_common.sh@817 -- # '[' -z 113536 ']' 00:29:38.535 06:55:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:38.535 06:55:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:38.535 06:55:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:38.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:38.535 06:55:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:38.535 06:55:42 -- common/autotest_common.sh@10 -- # set +x 00:29:38.535 [2024-04-17 06:55:43.000777] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:29:38.535 [2024-04-17 06:55:43.000852] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113536 ] 00:29:38.535 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.535 [2024-04-17 06:55:43.068612] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.793 [2024-04-17 06:55:43.161070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:38.793 06:55:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:38.793 06:55:43 -- common/autotest_common.sh@850 -- # return 0 00:29:38.793 06:55:43 -- host/digest.sh@86 -- # false 00:29:38.793 06:55:43 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:38.793 06:55:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:39.051 06:55:43 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:39.051 06:55:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:39.617 nvme0n1 00:29:39.617 06:55:43 -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:39.617 06:55:43 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:39.617 Running I/O for 2 seconds... 
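[Editor's note] Because bdevperf was started with --wait-for-rpc, it does nothing until the harness drives it over /var/tmp/bperf.sock: it first completes framework initialization, then attaches an NVMe-oF controller with data digest enabled (--ddgst) against the 10.0.0.2:4420 listener created above, and finally triggers the 2-second run through bdevperf.py. A condensed sketch of that sequence, using only the calls visible in the xtrace:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bperf.sock
    "$SPDK"/scripts/rpc.py -s "$SOCK" framework_start_init
    "$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0      # exposes bdev nvme0n1 inside bdevperf
    "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests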
00:29:42.144 00:29:42.144 Latency(us) 00:29:42.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:42.144 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:42.144 nvme0n1 : 2.05 18620.51 72.74 0.00 0.00 6757.07 2973.39 46797.56 00:29:42.144 =================================================================================================================== 00:29:42.144 Total : 18620.51 72.74 0.00 0.00 6757.07 2973.39 46797.56 00:29:42.144 0 00:29:42.144 06:55:46 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:42.144 06:55:46 -- host/digest.sh@93 -- # get_accel_stats 00:29:42.144 06:55:46 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:42.144 06:55:46 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:42.144 | select(.opcode=="crc32c") 00:29:42.144 | "\(.module_name) \(.executed)"' 00:29:42.144 06:55:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:42.144 06:55:46 -- host/digest.sh@94 -- # false 00:29:42.144 06:55:46 -- host/digest.sh@94 -- # exp_module=software 00:29:42.144 06:55:46 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:42.144 06:55:46 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:42.144 06:55:46 -- host/digest.sh@98 -- # killprocess 113536 00:29:42.144 06:55:46 -- common/autotest_common.sh@936 -- # '[' -z 113536 ']' 00:29:42.144 06:55:46 -- common/autotest_common.sh@940 -- # kill -0 113536 00:29:42.144 06:55:46 -- common/autotest_common.sh@941 -- # uname 00:29:42.144 06:55:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:42.144 06:55:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113536 00:29:42.144 06:55:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:42.144 06:55:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:42.144 06:55:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113536' 00:29:42.144 killing process with pid 113536 00:29:42.144 06:55:46 -- common/autotest_common.sh@955 -- # kill 113536 00:29:42.144 Received shutdown signal, test time was about 2.000000 seconds 00:29:42.144 00:29:42.144 Latency(us) 00:29:42.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:42.144 =================================================================================================================== 00:29:42.144 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:42.144 06:55:46 -- common/autotest_common.sh@960 -- # wait 113536 00:29:42.144 06:55:46 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:42.144 06:55:46 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:42.144 06:55:46 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:42.144 06:55:46 -- host/digest.sh@80 -- # rw=randread 00:29:42.144 06:55:46 -- host/digest.sh@80 -- # bs=131072 00:29:42.144 06:55:46 -- host/digest.sh@80 -- # qd=16 00:29:42.144 06:55:46 -- host/digest.sh@80 -- # scan_dsa=false 00:29:42.144 06:55:46 -- host/digest.sh@83 -- # bperfpid=113948 00:29:42.144 06:55:46 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:42.144 06:55:46 -- host/digest.sh@84 -- # waitforlisten 113948 /var/tmp/bperf.sock 00:29:42.144 06:55:46 -- common/autotest_common.sh@817 -- # '[' -z 113948 ']' 00:29:42.144 06:55:46 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:42.144 06:55:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:42.144 06:55:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:42.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:42.144 06:55:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:42.144 06:55:46 -- common/autotest_common.sh@10 -- # set +x 00:29:42.144 [2024-04-17 06:55:46.694431] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:29:42.144 [2024-04-17 06:55:46.694522] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113948 ] 00:29:42.144 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:42.144 Zero copy mechanism will not be used. 00:29:42.144 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.402 [2024-04-17 06:55:46.756955] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.402 [2024-04-17 06:55:46.843758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:42.402 06:55:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:42.402 06:55:46 -- common/autotest_common.sh@850 -- # return 0 00:29:42.402 06:55:46 -- host/digest.sh@86 -- # false 00:29:42.402 06:55:46 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:42.402 06:55:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:42.660 06:55:47 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:42.660 06:55:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:43.225 nvme0n1 00:29:43.225 06:55:47 -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:43.225 06:55:47 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:43.225 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:43.225 Zero copy mechanism will not be used. 00:29:43.225 Running I/O for 2 seconds... 
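[Editor's note] Each workload is followed by the same pass/fail check: digest.sh reads the accel framework statistics back from bdevperf and verifies that crc32c (the NVMe/TCP digest algorithm) was actually executed, and by the expected module; with dsa_initiator false the expected module is software. A sketch of that check, condensed from the get_accel_stats calls in the trace (bperf_rpc is the harness helper that wraps rpc.py -s /var/tmp/bperf.sock):

    # emit "<module_name> <executed>" for the crc32c opcode from bdevperf's accel stats
    read -r acc_module acc_executed < <(
        bperf_rpc accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 ))             # at least one digest was computed ...
    [[ $acc_module == software ]]      # ... and by the software module (no DSA configured)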
00:29:45.756 00:29:45.756 Latency(us) 00:29:45.756 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:45.756 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:45.756 nvme0n1 : 2.01 2519.18 314.90 0.00 0.00 6345.50 1577.72 8980.86 00:29:45.756 =================================================================================================================== 00:29:45.756 Total : 2519.18 314.90 0.00 0.00 6345.50 1577.72 8980.86 00:29:45.756 0 00:29:45.756 06:55:49 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:45.756 06:55:49 -- host/digest.sh@93 -- # get_accel_stats 00:29:45.756 06:55:49 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:45.756 06:55:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:45.756 06:55:49 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:45.756 | select(.opcode=="crc32c") 00:29:45.756 | "\(.module_name) \(.executed)"' 00:29:45.756 06:55:50 -- host/digest.sh@94 -- # false 00:29:45.756 06:55:50 -- host/digest.sh@94 -- # exp_module=software 00:29:45.756 06:55:50 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:45.756 06:55:50 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:45.756 06:55:50 -- host/digest.sh@98 -- # killprocess 113948 00:29:45.756 06:55:50 -- common/autotest_common.sh@936 -- # '[' -z 113948 ']' 00:29:45.756 06:55:50 -- common/autotest_common.sh@940 -- # kill -0 113948 00:29:45.756 06:55:50 -- common/autotest_common.sh@941 -- # uname 00:29:45.756 06:55:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:45.756 06:55:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113948 00:29:45.756 06:55:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:45.756 06:55:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:45.756 06:55:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113948' 00:29:45.756 killing process with pid 113948 00:29:45.756 06:55:50 -- common/autotest_common.sh@955 -- # kill 113948 00:29:45.756 Received shutdown signal, test time was about 2.000000 seconds 00:29:45.756 00:29:45.756 Latency(us) 00:29:45.756 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:45.756 =================================================================================================================== 00:29:45.756 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:45.756 06:55:50 -- common/autotest_common.sh@960 -- # wait 113948 00:29:45.756 06:55:50 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:45.756 06:55:50 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:45.756 06:55:50 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:45.756 06:55:50 -- host/digest.sh@80 -- # rw=randwrite 00:29:45.756 06:55:50 -- host/digest.sh@80 -- # bs=4096 00:29:45.756 06:55:50 -- host/digest.sh@80 -- # qd=128 00:29:45.756 06:55:50 -- host/digest.sh@80 -- # scan_dsa=false 00:29:45.756 06:55:50 -- host/digest.sh@83 -- # bperfpid=114473 00:29:45.756 06:55:50 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:45.756 06:55:50 -- host/digest.sh@84 -- # waitforlisten 114473 /var/tmp/bperf.sock 00:29:45.756 06:55:50 -- common/autotest_common.sh@817 -- # '[' -z 114473 ']' 00:29:45.756 06:55:50 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:45.756 06:55:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:45.756 06:55:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:45.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:45.756 06:55:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:45.756 06:55:50 -- common/autotest_common.sh@10 -- # set +x 00:29:46.022 [2024-04-17 06:55:50.377545] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:29:46.022 [2024-04-17 06:55:50.377624] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114473 ] 00:29:46.022 EAL: No free 2048 kB hugepages reported on node 1 00:29:46.022 [2024-04-17 06:55:50.437461] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.022 [2024-04-17 06:55:50.522145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:46.022 06:55:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:46.022 06:55:50 -- common/autotest_common.sh@850 -- # return 0 00:29:46.022 06:55:50 -- host/digest.sh@86 -- # false 00:29:46.022 06:55:50 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:46.022 06:55:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:46.588 06:55:50 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:46.588 06:55:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:46.846 nvme0n1 00:29:46.846 06:55:51 -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:46.846 06:55:51 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:46.846 Running I/O for 2 seconds... 
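[Editor's note] Between runs the harness tears down the previous bdevperf instance with killprocess, whose checks are visible in the xtrace above: it requires a non-empty PID that is still alive, looks up the process name (reactor_1 for bdevperf, reactor_0 for the target), and only then sends the kill and reaps the process. A rough sketch reconstructed from the trace; the real helper in autotest_common.sh also covers FreeBSD and sudo-wrapped processes, which are elided here:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                    # no pid given, nothing to do
        kill -0 "$pid" || return 1                   # process must still be alive
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            if [[ $process_name == sudo ]]; then
                return 1                             # sudo-wrapped case not shown in this sketch
            fi
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                  # reap the child, as the trace does at @960
    }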
00:29:49.374 00:29:49.374 Latency(us) 00:29:49.374 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:49.374 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:49.374 nvme0n1 : 2.01 19392.85 75.75 0.00 0.00 6589.91 3228.25 17961.72 00:29:49.374 =================================================================================================================== 00:29:49.374 Total : 19392.85 75.75 0.00 0.00 6589.91 3228.25 17961.72 00:29:49.374 0 00:29:49.374 06:55:53 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:49.374 06:55:53 -- host/digest.sh@93 -- # get_accel_stats 00:29:49.374 06:55:53 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:49.374 06:55:53 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:49.374 | select(.opcode=="crc32c") 00:29:49.374 | "\(.module_name) \(.executed)"' 00:29:49.374 06:55:53 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:49.374 06:55:53 -- host/digest.sh@94 -- # false 00:29:49.374 06:55:53 -- host/digest.sh@94 -- # exp_module=software 00:29:49.374 06:55:53 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:49.374 06:55:53 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:49.374 06:55:53 -- host/digest.sh@98 -- # killprocess 114473 00:29:49.374 06:55:53 -- common/autotest_common.sh@936 -- # '[' -z 114473 ']' 00:29:49.374 06:55:53 -- common/autotest_common.sh@940 -- # kill -0 114473 00:29:49.374 06:55:53 -- common/autotest_common.sh@941 -- # uname 00:29:49.374 06:55:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:49.374 06:55:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 114473 00:29:49.374 06:55:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:49.374 06:55:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:49.375 06:55:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 114473' 00:29:49.375 killing process with pid 114473 00:29:49.375 06:55:53 -- common/autotest_common.sh@955 -- # kill 114473 00:29:49.375 Received shutdown signal, test time was about 2.000000 seconds 00:29:49.375 00:29:49.375 Latency(us) 00:29:49.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:49.375 =================================================================================================================== 00:29:49.375 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:49.375 06:55:53 -- common/autotest_common.sh@960 -- # wait 114473 00:29:49.375 06:55:53 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:49.375 06:55:53 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:49.375 06:55:53 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:49.375 06:55:53 -- host/digest.sh@80 -- # rw=randwrite 00:29:49.375 06:55:53 -- host/digest.sh@80 -- # bs=131072 00:29:49.375 06:55:53 -- host/digest.sh@80 -- # qd=16 00:29:49.375 06:55:53 -- host/digest.sh@80 -- # scan_dsa=false 00:29:49.375 06:55:53 -- host/digest.sh@83 -- # bperfpid=114878 00:29:49.375 06:55:53 -- host/digest.sh@84 -- # waitforlisten 114878 /var/tmp/bperf.sock 00:29:49.375 06:55:53 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:49.375 06:55:53 -- common/autotest_common.sh@817 -- # '[' -z 114878 ']' 00:29:49.375 06:55:53 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:49.375 06:55:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:49.375 06:55:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:49.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:49.375 06:55:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:49.375 06:55:53 -- common/autotest_common.sh@10 -- # set +x 00:29:49.375 [2024-04-17 06:55:53.959288] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:29:49.375 [2024-04-17 06:55:53.959379] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114878 ] 00:29:49.375 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:49.375 Zero copy mechanism will not be used. 00:29:49.634 EAL: No free 2048 kB hugepages reported on node 1 00:29:49.634 [2024-04-17 06:55:54.027255] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.634 [2024-04-17 06:55:54.120703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:49.634 06:55:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:49.634 06:55:54 -- common/autotest_common.sh@850 -- # return 0 00:29:49.634 06:55:54 -- host/digest.sh@86 -- # false 00:29:49.634 06:55:54 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:49.634 06:55:54 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:50.199 06:55:54 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:50.199 06:55:54 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:50.457 nvme0n1 00:29:50.457 06:55:55 -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:50.457 06:55:55 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:50.715 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:50.715 Zero copy mechanism will not be used. 00:29:50.715 Running I/O for 2 seconds... 
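[Editor's note] For reference, the four nvmf_digest_clean workloads driven above differ only in the bdevperf arguments; the two 131072-byte runs additionally log that the I/O size exceeds the 64 KiB zero-copy threshold, so zero copy is disabled for them:

    -w randread  -o 4096   -q 128    (pid 113536)
    -w randread  -o 131072 -q 16     (pid 113948)
    -w randwrite -o 4096   -q 128    (pid 114473)
    -w randwrite -o 131072 -q 16     (pid 114878)
    # common flags in every run: -m 2 -r /var/tmp/bperf.sock -t 2 -z --wait-for-rpc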
00:29:52.614 00:29:52.614 Latency(us) 00:29:52.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:52.614 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:52.614 nvme0n1 : 2.01 2524.79 315.60 0.00 0.00 6322.17 4781.70 12913.02 00:29:52.614 =================================================================================================================== 00:29:52.615 Total : 2524.79 315.60 0.00 0.00 6322.17 4781.70 12913.02 00:29:52.615 0 00:29:52.615 06:55:57 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:52.615 06:55:57 -- host/digest.sh@93 -- # get_accel_stats 00:29:52.615 06:55:57 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:52.615 06:55:57 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:52.615 | select(.opcode=="crc32c") 00:29:52.615 | "\(.module_name) \(.executed)"' 00:29:52.615 06:55:57 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:52.872 06:55:57 -- host/digest.sh@94 -- # false 00:29:52.872 06:55:57 -- host/digest.sh@94 -- # exp_module=software 00:29:52.872 06:55:57 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:52.872 06:55:57 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:52.872 06:55:57 -- host/digest.sh@98 -- # killprocess 114878 00:29:52.872 06:55:57 -- common/autotest_common.sh@936 -- # '[' -z 114878 ']' 00:29:52.872 06:55:57 -- common/autotest_common.sh@940 -- # kill -0 114878 00:29:52.872 06:55:57 -- common/autotest_common.sh@941 -- # uname 00:29:52.872 06:55:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:52.872 06:55:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 114878 00:29:52.872 06:55:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:52.872 06:55:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:52.872 06:55:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 114878' 00:29:52.872 killing process with pid 114878 00:29:52.872 06:55:57 -- common/autotest_common.sh@955 -- # kill 114878 00:29:52.872 Received shutdown signal, test time was about 2.000000 seconds 00:29:52.872 00:29:52.872 Latency(us) 00:29:52.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:52.872 =================================================================================================================== 00:29:52.872 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:52.873 06:55:57 -- common/autotest_common.sh@960 -- # wait 114878 00:29:53.131 06:55:57 -- host/digest.sh@132 -- # killprocess 113512 00:29:53.131 06:55:57 -- common/autotest_common.sh@936 -- # '[' -z 113512 ']' 00:29:53.131 06:55:57 -- common/autotest_common.sh@940 -- # kill -0 113512 00:29:53.131 06:55:57 -- common/autotest_common.sh@941 -- # uname 00:29:53.131 06:55:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:53.131 06:55:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113512 00:29:53.131 06:55:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:53.131 06:55:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:53.131 06:55:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113512' 00:29:53.131 killing process with pid 113512 00:29:53.131 06:55:57 -- common/autotest_common.sh@955 -- # kill 113512 00:29:53.131 06:55:57 -- common/autotest_common.sh@960 -- # wait 113512 00:29:53.390 00:29:53.390 
real 0m15.391s 00:29:53.390 user 0m29.084s 00:29:53.390 sys 0m4.281s 00:29:53.390 06:55:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:53.390 06:55:57 -- common/autotest_common.sh@10 -- # set +x 00:29:53.390 ************************************ 00:29:53.390 END TEST nvmf_digest_clean 00:29:53.390 ************************************ 00:29:53.390 06:55:57 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:53.390 06:55:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:53.390 06:55:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:53.390 06:55:57 -- common/autotest_common.sh@10 -- # set +x 00:29:53.649 ************************************ 00:29:53.649 START TEST nvmf_digest_error 00:29:53.649 ************************************ 00:29:53.649 06:55:58 -- common/autotest_common.sh@1111 -- # run_digest_error 00:29:53.649 06:55:58 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:53.649 06:55:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:53.649 06:55:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:53.649 06:55:58 -- common/autotest_common.sh@10 -- # set +x 00:29:53.649 06:55:58 -- nvmf/common.sh@470 -- # nvmfpid=115327 00:29:53.649 06:55:58 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:53.649 06:55:58 -- nvmf/common.sh@471 -- # waitforlisten 115327 00:29:53.649 06:55:58 -- common/autotest_common.sh@817 -- # '[' -z 115327 ']' 00:29:53.649 06:55:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.649 06:55:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:53.649 06:55:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:53.649 06:55:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:53.649 06:55:58 -- common/autotest_common.sh@10 -- # set +x 00:29:53.649 [2024-04-17 06:55:58.109281] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:29:53.649 [2024-04-17 06:55:58.109366] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:53.649 EAL: No free 2048 kB hugepages reported on node 1 00:29:53.649 [2024-04-17 06:55:58.182507] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.907 [2024-04-17 06:55:58.272208] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:53.907 [2024-04-17 06:55:58.272262] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:53.907 [2024-04-17 06:55:58.272278] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:53.907 [2024-04-17 06:55:58.272292] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:53.907 [2024-04-17 06:55:58.272304] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:53.907 [2024-04-17 06:55:58.272342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.907 06:55:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:53.907 06:55:58 -- common/autotest_common.sh@850 -- # return 0 00:29:53.907 06:55:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:53.907 06:55:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:53.907 06:55:58 -- common/autotest_common.sh@10 -- # set +x 00:29:53.907 06:55:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:53.907 06:55:58 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:53.907 06:55:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:53.907 06:55:58 -- common/autotest_common.sh@10 -- # set +x 00:29:53.907 [2024-04-17 06:55:58.340911] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:53.907 06:55:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:53.907 06:55:58 -- host/digest.sh@105 -- # common_target_config 00:29:53.907 06:55:58 -- host/digest.sh@43 -- # rpc_cmd 00:29:53.907 06:55:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:53.907 06:55:58 -- common/autotest_common.sh@10 -- # set +x 00:29:53.907 null0 00:29:53.907 [2024-04-17 06:55:58.451732] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:53.907 [2024-04-17 06:55:58.475943] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:53.907 06:55:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:53.907 06:55:58 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:53.907 06:55:58 -- host/digest.sh@54 -- # local rw bs qd 00:29:53.907 06:55:58 -- host/digest.sh@56 -- # rw=randread 00:29:53.907 06:55:58 -- host/digest.sh@56 -- # bs=4096 00:29:53.907 06:55:58 -- host/digest.sh@56 -- # qd=128 00:29:53.907 06:55:58 -- host/digest.sh@58 -- # bperfpid=115468 00:29:53.907 06:55:58 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:53.907 06:55:58 -- host/digest.sh@60 -- # waitforlisten 115468 /var/tmp/bperf.sock 00:29:53.907 06:55:58 -- common/autotest_common.sh@817 -- # '[' -z 115468 ']' 00:29:53.907 06:55:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:53.907 06:55:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:53.907 06:55:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:53.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:53.907 06:55:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:53.907 06:55:58 -- common/autotest_common.sh@10 -- # set +x 00:29:54.165 [2024-04-17 06:55:58.522006] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:29:54.165 [2024-04-17 06:55:58.522080] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115468 ] 00:29:54.165 EAL: No free 2048 kB hugepages reported on node 1 00:29:54.165 [2024-04-17 06:55:58.583571] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.165 [2024-04-17 06:55:58.672181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:54.423 06:55:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:54.423 06:55:58 -- common/autotest_common.sh@850 -- # return 0 00:29:54.423 06:55:58 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:54.423 06:55:58 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:54.423 06:55:59 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:54.423 06:55:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:54.423 06:55:59 -- common/autotest_common.sh@10 -- # set +x 00:29:54.423 06:55:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:54.423 06:55:59 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:54.423 06:55:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:54.987 nvme0n1 00:29:54.987 06:55:59 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:54.987 06:55:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:54.987 06:55:59 -- common/autotest_common.sh@10 -- # set +x 00:29:54.987 06:55:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:54.988 06:55:59 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:54.988 06:55:59 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:54.988 Running I/O for 2 seconds... 
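[Editor's note] The nvmf_digest_error test above routes the target's crc32c operations through SPDK's "error" accel module and then tells it to corrupt its results, so the initiator's data-digest check fails on received data; because the bdev layer was configured with --bdev-retry-count -1, those failures surface below as retried COMMAND TRANSIENT TRANSPORT ERROR completions rather than failed I/O. A sketch of the RPC sequence as it appears in the trace (rpc_cmd targets the nvmf target's default RPC socket inside cvl_0_0_ns_spdk, bperf_rpc targets bdevperf's /var/tmp/bperf.sock):

    rpc_cmd accel_assign_opc -o crc32c -m error                     # target: crc32c handled by the error module
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # host: count NVMe errors, retry forever
    rpc_cmd accel_error_inject_error -o crc32c -t disable           # keep injection off while connecting
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0              # connect with data digest enabled
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256    # now corrupt the target's crc32c results
    bperf_py perform_tests                                          # run I/O and exercise the digest-error path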
00:29:54.988 [2024-04-17 06:55:59.584162] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:54.988 [2024-04-17 06:55:59.584223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.988 [2024-04-17 06:55:59.584258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.246 [2024-04-17 06:55:59.600231] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.246 [2024-04-17 06:55:59.600261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.246 [2024-04-17 06:55:59.600277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.246 [2024-04-17 06:55:59.614743] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.246 [2024-04-17 06:55:59.614779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.246 [2024-04-17 06:55:59.614798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.246 [2024-04-17 06:55:59.628871] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.246 [2024-04-17 06:55:59.628905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.246 [2024-04-17 06:55:59.628924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.246 [2024-04-17 06:55:59.642468] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.246 [2024-04-17 06:55:59.642502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.246 [2024-04-17 06:55:59.642521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.246 [2024-04-17 06:55:59.656831] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.246 [2024-04-17 06:55:59.656865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.246 [2024-04-17 06:55:59.656884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.246 [2024-04-17 06:55:59.669289] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.246 [2024-04-17 06:55:59.669335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.246 [2024-04-17 06:55:59.669351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.246 [2024-04-17 06:55:59.682511] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.246 [2024-04-17 06:55:59.682542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.246 [2024-04-17 06:55:59.682559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.246 [2024-04-17 06:55:59.696016] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.246 [2024-04-17 06:55:59.696050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.246 [2024-04-17 06:55:59.696069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.246 [2024-04-17 06:55:59.709315] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.246 [2024-04-17 06:55:59.709348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.246 [2024-04-17 06:55:59.709364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.246 [2024-04-17 06:55:59.723880] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.246 [2024-04-17 06:55:59.723914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.246 [2024-04-17 06:55:59.723932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.246 [2024-04-17 06:55:59.739630] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.246 [2024-04-17 06:55:59.739665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.246 [2024-04-17 06:55:59.739684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.246 [2024-04-17 06:55:59.751321] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.246 [2024-04-17 06:55:59.751349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.246 [2024-04-17 06:55:59.751364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.246 [2024-04-17 06:55:59.765685] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.246 [2024-04-17 06:55:59.765719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.246 [2024-04-17 06:55:59.765738] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.246 [2024-04-17 06:55:59.779345] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.246 [2024-04-17 06:55:59.779378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.246 [2024-04-17 06:55:59.779397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.246 [2024-04-17 06:55:59.793013] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.246 [2024-04-17 06:55:59.793046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.246 [2024-04-17 06:55:59.793064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.246 [2024-04-17 06:55:59.806718] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.246 [2024-04-17 06:55:59.806751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.246 [2024-04-17 06:55:59.806770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.246 [2024-04-17 06:55:59.821946] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.246 [2024-04-17 06:55:59.821976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.246 [2024-04-17 06:55:59.821992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.246 [2024-04-17 06:55:59.834543] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.246 [2024-04-17 06:55:59.834574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.246 [2024-04-17 06:55:59.834591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.246 [2024-04-17 06:55:59.847459] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.246 [2024-04-17 06:55:59.847494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.246 [2024-04-17 06:55:59.847510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.505 [2024-04-17 06:55:59.861637] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.505 [2024-04-17 06:55:59.861670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.505 [2024-04-17 06:55:59.861689] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.505 [2024-04-17 06:55:59.878095] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.505 [2024-04-17 06:55:59.878128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.505 [2024-04-17 06:55:59.878147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.505 [2024-04-17 06:55:59.891473] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.505 [2024-04-17 06:55:59.891503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.505 [2024-04-17 06:55:59.891519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.505 [2024-04-17 06:55:59.903395] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.505 [2024-04-17 06:55:59.903422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.505 [2024-04-17 06:55:59.903437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.505 [2024-04-17 06:55:59.917855] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.505 [2024-04-17 06:55:59.917889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.505 [2024-04-17 06:55:59.917907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.505 [2024-04-17 06:55:59.931941] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.505 [2024-04-17 06:55:59.931976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.505 [2024-04-17 06:55:59.931994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.505 [2024-04-17 06:55:59.946522] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.505 [2024-04-17 06:55:59.946553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.505 [2024-04-17 06:55:59.946575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.505 [2024-04-17 06:55:59.959794] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.505 [2024-04-17 06:55:59.959827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:55.505 [2024-04-17 06:55:59.959845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.505 [2024-04-17 06:55:59.972681] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.505 [2024-04-17 06:55:59.972715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.505 [2024-04-17 06:55:59.972733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.505 [2024-04-17 06:55:59.987915] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.505 [2024-04-17 06:55:59.987948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.505 [2024-04-17 06:55:59.987966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.505 [2024-04-17 06:56:00.000918] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.505 [2024-04-17 06:56:00.000951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.505 [2024-04-17 06:56:00.000969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.505 [2024-04-17 06:56:00.014986] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.505 [2024-04-17 06:56:00.015032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.505 [2024-04-17 06:56:00.015052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.505 [2024-04-17 06:56:00.028333] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.505 [2024-04-17 06:56:00.028364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.505 [2024-04-17 06:56:00.028381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.505 [2024-04-17 06:56:00.042540] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.505 [2024-04-17 06:56:00.042582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.505 [2024-04-17 06:56:00.042602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.506 [2024-04-17 06:56:00.058633] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.506 [2024-04-17 06:56:00.058672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 
lba:9743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.506 [2024-04-17 06:56:00.058691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.506 [2024-04-17 06:56:00.069802] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.506 [2024-04-17 06:56:00.069847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.506 [2024-04-17 06:56:00.069867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.506 [2024-04-17 06:56:00.085191] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.506 [2024-04-17 06:56:00.085239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.506 [2024-04-17 06:56:00.085256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.506 [2024-04-17 06:56:00.100152] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.506 [2024-04-17 06:56:00.100194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.506 [2024-04-17 06:56:00.100240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.763 [2024-04-17 06:56:00.115834] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.763 [2024-04-17 06:56:00.115867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.763 [2024-04-17 06:56:00.115885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.763 [2024-04-17 06:56:00.133494] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.763 [2024-04-17 06:56:00.133524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.763 [2024-04-17 06:56:00.133555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.763 [2024-04-17 06:56:00.147102] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.763 [2024-04-17 06:56:00.147132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.763 [2024-04-17 06:56:00.147150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.763 [2024-04-17 06:56:00.159711] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.764 [2024-04-17 06:56:00.159757] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.764 [2024-04-17 06:56:00.159774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.764 [2024-04-17 06:56:00.171659] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.764 [2024-04-17 06:56:00.171689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.764 [2024-04-17 06:56:00.171706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.764 [2024-04-17 06:56:00.186111] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.764 [2024-04-17 06:56:00.186140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.764 [2024-04-17 06:56:00.186171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.764 [2024-04-17 06:56:00.198915] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.764 [2024-04-17 06:56:00.198945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.764 [2024-04-17 06:56:00.198962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.764 [2024-04-17 06:56:00.211358] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.764 [2024-04-17 06:56:00.211390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.764 [2024-04-17 06:56:00.211407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.764 [2024-04-17 06:56:00.223502] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.764 [2024-04-17 06:56:00.223534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.764 [2024-04-17 06:56:00.223551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.764 [2024-04-17 06:56:00.235963] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.764 [2024-04-17 06:56:00.235994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.764 [2024-04-17 06:56:00.236011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.764 [2024-04-17 06:56:00.251018] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.764 
[2024-04-17 06:56:00.251048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.764 [2024-04-17 06:56:00.251078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.764 [2024-04-17 06:56:00.262062] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.764 [2024-04-17 06:56:00.262091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.764 [2024-04-17 06:56:00.262121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.764 [2024-04-17 06:56:00.275667] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.764 [2024-04-17 06:56:00.275712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.764 [2024-04-17 06:56:00.275727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.764 [2024-04-17 06:56:00.290834] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.764 [2024-04-17 06:56:00.290864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.764 [2024-04-17 06:56:00.290880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.764 [2024-04-17 06:56:00.301729] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.764 [2024-04-17 06:56:00.301759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.764 [2024-04-17 06:56:00.301782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.764 [2024-04-17 06:56:00.316185] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.764 [2024-04-17 06:56:00.316216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.764 [2024-04-17 06:56:00.316233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.764 [2024-04-17 06:56:00.330797] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.764 [2024-04-17 06:56:00.330828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.764 [2024-04-17 06:56:00.330853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.764 [2024-04-17 06:56:00.343122] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xcb94f0) 00:29:55.764 [2024-04-17 06:56:00.343152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.764 [2024-04-17 06:56:00.343169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.764 [2024-04-17 06:56:00.356022] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.764 [2024-04-17 06:56:00.356050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.764 [2024-04-17 06:56:00.356080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.764 [2024-04-17 06:56:00.369964] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:55.764 [2024-04-17 06:56:00.369994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.764 [2024-04-17 06:56:00.370024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.023 [2024-04-17 06:56:00.381112] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.023 [2024-04-17 06:56:00.381139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.023 [2024-04-17 06:56:00.381169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.023 [2024-04-17 06:56:00.394944] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.023 [2024-04-17 06:56:00.394975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.023 [2024-04-17 06:56:00.394991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.023 [2024-04-17 06:56:00.409862] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.023 [2024-04-17 06:56:00.409893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.023 [2024-04-17 06:56:00.409910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.023 [2024-04-17 06:56:00.421226] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.023 [2024-04-17 06:56:00.421255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.023 [2024-04-17 06:56:00.421270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.023 [2024-04-17 06:56:00.436096] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.023 [2024-04-17 06:56:00.436136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.023 [2024-04-17 06:56:00.436153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.023 [2024-04-17 06:56:00.450410] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.023 [2024-04-17 06:56:00.450440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.023 [2024-04-17 06:56:00.450456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.023 [2024-04-17 06:56:00.461691] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.023 [2024-04-17 06:56:00.461721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.023 [2024-04-17 06:56:00.461737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.023 [2024-04-17 06:56:00.476686] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.023 [2024-04-17 06:56:00.476716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.023 [2024-04-17 06:56:00.476732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.023 [2024-04-17 06:56:00.490188] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.023 [2024-04-17 06:56:00.490218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.023 [2024-04-17 06:56:00.490243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.023 [2024-04-17 06:56:00.501495] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.023 [2024-04-17 06:56:00.501526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.023 [2024-04-17 06:56:00.501543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.023 [2024-04-17 06:56:00.514609] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.023 [2024-04-17 06:56:00.514639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.023 [2024-04-17 06:56:00.514655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:56.023 [2024-04-17 06:56:00.528536] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.023 [2024-04-17 06:56:00.528564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.023 [2024-04-17 06:56:00.528601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.023 [2024-04-17 06:56:00.541752] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.023 [2024-04-17 06:56:00.541782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.023 [2024-04-17 06:56:00.541798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.023 [2024-04-17 06:56:00.556695] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.023 [2024-04-17 06:56:00.556726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.023 [2024-04-17 06:56:00.556742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.023 [2024-04-17 06:56:00.568028] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.023 [2024-04-17 06:56:00.568055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.023 [2024-04-17 06:56:00.568085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.023 [2024-04-17 06:56:00.581545] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.023 [2024-04-17 06:56:00.581575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.023 [2024-04-17 06:56:00.581592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.023 [2024-04-17 06:56:00.594887] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.023 [2024-04-17 06:56:00.594916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.023 [2024-04-17 06:56:00.594932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.023 [2024-04-17 06:56:00.605852] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.023 [2024-04-17 06:56:00.605879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.023 [2024-04-17 06:56:00.605910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.023 [2024-04-17 06:56:00.620067] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.023 [2024-04-17 06:56:00.620111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.023 [2024-04-17 06:56:00.620128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.282 [2024-04-17 06:56:00.635727] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.282 [2024-04-17 06:56:00.635757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.282 [2024-04-17 06:56:00.635774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.282 [2024-04-17 06:56:00.648127] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.282 [2024-04-17 06:56:00.648165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.282 [2024-04-17 06:56:00.648192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.282 [2024-04-17 06:56:00.659638] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.282 [2024-04-17 06:56:00.659666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.282 [2024-04-17 06:56:00.659696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.282 [2024-04-17 06:56:00.674315] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.282 [2024-04-17 06:56:00.674344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.282 [2024-04-17 06:56:00.674360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.282 [2024-04-17 06:56:00.686931] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.282 [2024-04-17 06:56:00.686961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.282 [2024-04-17 06:56:00.686978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.282 [2024-04-17 06:56:00.699166] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.282 [2024-04-17 06:56:00.699216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.282 [2024-04-17 06:56:00.699233] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.282 [2024-04-17 06:56:00.712275] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.282 [2024-04-17 06:56:00.712305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.282 [2024-04-17 06:56:00.712322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.282 [2024-04-17 06:56:00.724767] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.282 [2024-04-17 06:56:00.724797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.282 [2024-04-17 06:56:00.724813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.282 [2024-04-17 06:56:00.736776] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.282 [2024-04-17 06:56:00.736806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.282 [2024-04-17 06:56:00.736822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.282 [2024-04-17 06:56:00.751382] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.282 [2024-04-17 06:56:00.751412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.282 [2024-04-17 06:56:00.751428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.282 [2024-04-17 06:56:00.764514] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.282 [2024-04-17 06:56:00.764544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.282 [2024-04-17 06:56:00.764561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.282 [2024-04-17 06:56:00.776899] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.282 [2024-04-17 06:56:00.776929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.282 [2024-04-17 06:56:00.776946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.282 [2024-04-17 06:56:00.788493] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.282 [2024-04-17 06:56:00.788523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.282 [2024-04-17 06:56:00.788539] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.282 [2024-04-17 06:56:00.801979] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.282 [2024-04-17 06:56:00.802008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.282 [2024-04-17 06:56:00.802024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.282 [2024-04-17 06:56:00.812723] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.282 [2024-04-17 06:56:00.812750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.282 [2024-04-17 06:56:00.812781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.282 [2024-04-17 06:56:00.827131] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.282 [2024-04-17 06:56:00.827159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.282 [2024-04-17 06:56:00.827198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.282 [2024-04-17 06:56:00.839092] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.282 [2024-04-17 06:56:00.839121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.282 [2024-04-17 06:56:00.839138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.282 [2024-04-17 06:56:00.853216] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.282 [2024-04-17 06:56:00.853244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.282 [2024-04-17 06:56:00.853260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.282 [2024-04-17 06:56:00.864015] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.282 [2024-04-17 06:56:00.864042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.282 [2024-04-17 06:56:00.864080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.282 [2024-04-17 06:56:00.877347] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.282 [2024-04-17 06:56:00.877376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:56.282 [2024-04-17 06:56:00.877393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.541 [2024-04-17 06:56:00.892341] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.541 [2024-04-17 06:56:00.892371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.541 [2024-04-17 06:56:00.892388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.541 [2024-04-17 06:56:00.905031] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.541 [2024-04-17 06:56:00.905061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.541 [2024-04-17 06:56:00.905077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.541 [2024-04-17 06:56:00.917292] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.541 [2024-04-17 06:56:00.917322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.541 [2024-04-17 06:56:00.917338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.541 [2024-04-17 06:56:00.930791] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.541 [2024-04-17 06:56:00.930821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.541 [2024-04-17 06:56:00.930838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.541 [2024-04-17 06:56:00.941480] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.541 [2024-04-17 06:56:00.941507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.541 [2024-04-17 06:56:00.941523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.541 [2024-04-17 06:56:00.955944] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.541 [2024-04-17 06:56:00.955973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.541 [2024-04-17 06:56:00.956006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.541 [2024-04-17 06:56:00.969276] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.541 [2024-04-17 06:56:00.969305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17531 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.541 [2024-04-17 06:56:00.969322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.541 [2024-04-17 06:56:00.980943] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.541 [2024-04-17 06:56:00.980980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.541 [2024-04-17 06:56:00.980997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.541 [2024-04-17 06:56:00.993713] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.541 [2024-04-17 06:56:00.993742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.541 [2024-04-17 06:56:00.993758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.541 [2024-04-17 06:56:01.008362] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.541 [2024-04-17 06:56:01.008392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.541 [2024-04-17 06:56:01.008409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.541 [2024-04-17 06:56:01.019608] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.541 [2024-04-17 06:56:01.019638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.541 [2024-04-17 06:56:01.019654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.541 [2024-04-17 06:56:01.034305] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.541 [2024-04-17 06:56:01.034335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.541 [2024-04-17 06:56:01.034352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.541 [2024-04-17 06:56:01.046483] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.541 [2024-04-17 06:56:01.046512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.541 [2024-04-17 06:56:01.046543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.541 [2024-04-17 06:56:01.059623] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.541 [2024-04-17 06:56:01.059651] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.541 [2024-04-17 06:56:01.059682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.541 [2024-04-17 06:56:01.073908] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.541 [2024-04-17 06:56:01.073938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.541 [2024-04-17 06:56:01.073954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.541 [2024-04-17 06:56:01.086327] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.541 [2024-04-17 06:56:01.086356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.541 [2024-04-17 06:56:01.086372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.541 [2024-04-17 06:56:01.098196] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.541 [2024-04-17 06:56:01.098224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.541 [2024-04-17 06:56:01.098239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.541 [2024-04-17 06:56:01.112405] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.541 [2024-04-17 06:56:01.112434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.541 [2024-04-17 06:56:01.112450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.541 [2024-04-17 06:56:01.125796] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.541 [2024-04-17 06:56:01.125825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.541 [2024-04-17 06:56:01.125842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.541 [2024-04-17 06:56:01.136870] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.541 [2024-04-17 06:56:01.136900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.541 [2024-04-17 06:56:01.136916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.799 [2024-04-17 06:56:01.150144] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.799 [2024-04-17 06:56:01.150172] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.799 [2024-04-17 06:56:01.150210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.799 [2024-04-17 06:56:01.163540] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.799 [2024-04-17 06:56:01.163570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.799 [2024-04-17 06:56:01.163587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.799 [2024-04-17 06:56:01.175186] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.799 [2024-04-17 06:56:01.175230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.799 [2024-04-17 06:56:01.175247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.799 [2024-04-17 06:56:01.190215] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.799 [2024-04-17 06:56:01.190242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.799 [2024-04-17 06:56:01.190273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.799 [2024-04-17 06:56:01.203781] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.799 [2024-04-17 06:56:01.203815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.799 [2024-04-17 06:56:01.203841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.799 [2024-04-17 06:56:01.219259] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.799 [2024-04-17 06:56:01.219291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.799 [2024-04-17 06:56:01.219308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.799 [2024-04-17 06:56:01.231386] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.799 [2024-04-17 06:56:01.231417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.799 [2024-04-17 06:56:01.231433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.799 [2024-04-17 06:56:01.246597] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xcb94f0) 00:29:56.799 [2024-04-17 06:56:01.246627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.799 [2024-04-17 06:56:01.246643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.799 [2024-04-17 06:56:01.260201] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.799 [2024-04-17 06:56:01.260253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.799 [2024-04-17 06:56:01.260270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.799 [2024-04-17 06:56:01.274045] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.799 [2024-04-17 06:56:01.274077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.799 [2024-04-17 06:56:01.274096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.799 [2024-04-17 06:56:01.285933] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.799 [2024-04-17 06:56:01.285965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.799 [2024-04-17 06:56:01.285983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.799 [2024-04-17 06:56:01.301413] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.799 [2024-04-17 06:56:01.301442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.799 [2024-04-17 06:56:01.301476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.799 [2024-04-17 06:56:01.315246] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.799 [2024-04-17 06:56:01.315277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.799 [2024-04-17 06:56:01.315293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.799 [2024-04-17 06:56:01.329352] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.799 [2024-04-17 06:56:01.329380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.799 [2024-04-17 06:56:01.329396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.799 [2024-04-17 06:56:01.342798] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.799 [2024-04-17 06:56:01.342831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.799 [2024-04-17 06:56:01.342849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.799 [2024-04-17 06:56:01.357738] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.799 [2024-04-17 06:56:01.357772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.799 [2024-04-17 06:56:01.357791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.799 [2024-04-17 06:56:01.369385] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.799 [2024-04-17 06:56:01.369415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.799 [2024-04-17 06:56:01.369432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.799 [2024-04-17 06:56:01.383433] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.799 [2024-04-17 06:56:01.383471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.799 [2024-04-17 06:56:01.383488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.799 [2024-04-17 06:56:01.399418] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:56.799 [2024-04-17 06:56:01.399447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.799 [2024-04-17 06:56:01.399463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.056 [2024-04-17 06:56:01.411742] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:57.056 [2024-04-17 06:56:01.411774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.056 [2024-04-17 06:56:01.411792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.056 [2024-04-17 06:56:01.425685] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:57.056 [2024-04-17 06:56:01.425718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.056 [2024-04-17 06:56:01.425736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:57.056 [2024-04-17 06:56:01.442154] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:57.056 [2024-04-17 06:56:01.442195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.057 [2024-04-17 06:56:01.442234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.057 [2024-04-17 06:56:01.458294] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:57.057 [2024-04-17 06:56:01.458324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.057 [2024-04-17 06:56:01.458340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.057 [2024-04-17 06:56:01.471936] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:57.057 [2024-04-17 06:56:01.471970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.057 [2024-04-17 06:56:01.471988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.057 [2024-04-17 06:56:01.485116] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:57.057 [2024-04-17 06:56:01.485150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.057 [2024-04-17 06:56:01.485167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.057 [2024-04-17 06:56:01.498956] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:57.057 [2024-04-17 06:56:01.498990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.057 [2024-04-17 06:56:01.499008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.057 [2024-04-17 06:56:01.514192] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:57.057 [2024-04-17 06:56:01.514239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.057 [2024-04-17 06:56:01.514256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.057 [2024-04-17 06:56:01.528220] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:57.057 [2024-04-17 06:56:01.528251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.057 [2024-04-17 06:56:01.528268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.057 [2024-04-17 06:56:01.540441] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:57.057 [2024-04-17 06:56:01.540487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:17280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.057 [2024-04-17 06:56:01.540506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.057 [2024-04-17 06:56:01.554592] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:57.057 [2024-04-17 06:56:01.554626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.057 [2024-04-17 06:56:01.554644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.057 [2024-04-17 06:56:01.567870] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb94f0) 00:29:57.057 [2024-04-17 06:56:01.567905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.057 [2024-04-17 06:56:01.567922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.057 00:29:57.057 Latency(us) 00:29:57.057 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:57.057 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:57.057 nvme0n1 : 2.00 18874.87 73.73 0.00 0.00 6773.51 3422.44 18350.08 00:29:57.057 =================================================================================================================== 00:29:57.057 Total : 18874.87 73.73 0.00 0.00 6773.51 3422.44 18350.08 00:29:57.057 0 00:29:57.057 06:56:01 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:57.057 06:56:01 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:57.057 06:56:01 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:57.057 06:56:01 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:57.057 | .driver_specific 00:29:57.057 | .nvme_error 00:29:57.057 | .status_code 00:29:57.057 | .command_transient_transport_error' 00:29:57.314 06:56:01 -- host/digest.sh@71 -- # (( 148 > 0 )) 00:29:57.314 06:56:01 -- host/digest.sh@73 -- # killprocess 115468 00:29:57.314 06:56:01 -- common/autotest_common.sh@936 -- # '[' -z 115468 ']' 00:29:57.314 06:56:01 -- common/autotest_common.sh@940 -- # kill -0 115468 00:29:57.314 06:56:01 -- common/autotest_common.sh@941 -- # uname 00:29:57.314 06:56:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:57.314 06:56:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 115468 00:29:57.314 06:56:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:57.314 06:56:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:57.314 06:56:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 115468' 00:29:57.314 killing process with pid 115468 00:29:57.314 06:56:01 -- common/autotest_common.sh@955 -- # kill 115468 00:29:57.314 Received 
shutdown signal, test time was about 2.000000 seconds 00:29:57.314 00:29:57.314 Latency(us) 00:29:57.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:57.314 =================================================================================================================== 00:29:57.314 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:57.314 06:56:01 -- common/autotest_common.sh@960 -- # wait 115468 00:29:57.572 06:56:02 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:29:57.572 06:56:02 -- host/digest.sh@54 -- # local rw bs qd 00:29:57.572 06:56:02 -- host/digest.sh@56 -- # rw=randread 00:29:57.572 06:56:02 -- host/digest.sh@56 -- # bs=131072 00:29:57.572 06:56:02 -- host/digest.sh@56 -- # qd=16 00:29:57.572 06:56:02 -- host/digest.sh@58 -- # bperfpid=115868 00:29:57.572 06:56:02 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:29:57.572 06:56:02 -- host/digest.sh@60 -- # waitforlisten 115868 /var/tmp/bperf.sock 00:29:57.572 06:56:02 -- common/autotest_common.sh@817 -- # '[' -z 115868 ']' 00:29:57.572 06:56:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:57.572 06:56:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:57.572 06:56:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:57.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:57.572 06:56:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:57.572 06:56:02 -- common/autotest_common.sh@10 -- # set +x 00:29:57.572 [2024-04-17 06:56:02.140671] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:29:57.572 [2024-04-17 06:56:02.140752] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115868 ] 00:29:57.572 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:57.572 Zero copy mechanism will not be used. 
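For reference, the transient-error check traced just above (host/digest.sh @18/@27/@28/@71) reduces to the shell sketch below. It only restates the xtrace output from this run — socket path, bdev name, and the jq filter are copied from the trace, with the long rpc.py path abbreviated — and is not an authoritative re-implementation of the helper:

    # Query bdevperf's RPC socket for nvme0n1 I/O statistics and pull out the
    # transient transport error counter that the injected data-digest corruption bumps.
    count=$(rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
            | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( count > 0 ))   # this pass counted 148 transient errors, so the check passes and bdevperf is killed
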
00:29:57.572 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.830 [2024-04-17 06:56:02.202248] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:57.830 [2024-04-17 06:56:02.291614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:57.830 06:56:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:57.830 06:56:02 -- common/autotest_common.sh@850 -- # return 0 00:29:57.830 06:56:02 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:57.830 06:56:02 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:58.087 06:56:02 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:58.087 06:56:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:58.087 06:56:02 -- common/autotest_common.sh@10 -- # set +x 00:29:58.087 06:56:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:58.087 06:56:02 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:58.087 06:56:02 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:58.653 nvme0n1 00:29:58.653 06:56:03 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:58.653 06:56:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:58.653 06:56:03 -- common/autotest_common.sh@10 -- # set +x 00:29:58.653 06:56:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:58.653 06:56:03 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:58.653 06:56:03 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:58.653 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:58.653 Zero copy mechanism will not be used. 00:29:58.653 Running I/O for 2 seconds... 
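Consolidating the setup that is traced piecemeal above for this second pass (randread, 128 KiB I/O, queue depth 16, 2 seconds), the flow looks roughly like the sketch below. Paths are abbreviated to rpc.py / bdevperf.py, and it assumes — based on the autotest helpers, not shown here — that bperf_rpc targets /var/tmp/bperf.sock while rpc_cmd targets the nvmf target application's default RPC socket:

    # Start bdevperf on core 1 with its own RPC socket; -z defers the workload until perform_tests is issued.
    bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

    # Collect NVMe error statistics and retry failed I/O indefinitely.
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Keep crc32c error injection off while the controller attaches with data digest (--ddgst) enabled.
    rpc.py accel_error_inject_error -o crc32c -t disable        # assumption: default RPC socket of the target app
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Re-enable injection, corrupting crc32c results (flags copied verbatim from the trace), then run the workload.
    rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32  # assumption: same default RPC socket
    bdevperf.py -s /var/tmp/bperf.sock perform_tests
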
00:29:58.653 [2024-04-17 06:56:03.120901] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.653 [2024-04-17 06:56:03.120954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.653 [2024-04-17 06:56:03.120985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.653 [2024-04-17 06:56:03.130852] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.653 [2024-04-17 06:56:03.130890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.653 [2024-04-17 06:56:03.130921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.653 [2024-04-17 06:56:03.140727] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.653 [2024-04-17 06:56:03.140763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.653 [2024-04-17 06:56:03.140792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.653 [2024-04-17 06:56:03.150604] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.653 [2024-04-17 06:56:03.150640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.654 [2024-04-17 06:56:03.150671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.654 [2024-04-17 06:56:03.160489] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.654 [2024-04-17 06:56:03.160529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.654 [2024-04-17 06:56:03.160560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.654 [2024-04-17 06:56:03.170562] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.654 [2024-04-17 06:56:03.170607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.654 [2024-04-17 06:56:03.170638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.654 [2024-04-17 06:56:03.180509] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.654 [2024-04-17 06:56:03.180545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.654 [2024-04-17 06:56:03.180575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.654 [2024-04-17 06:56:03.190433] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.654 [2024-04-17 06:56:03.190482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.654 [2024-04-17 06:56:03.190513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.654 [2024-04-17 06:56:03.200541] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.654 [2024-04-17 06:56:03.200575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.654 [2024-04-17 06:56:03.200606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.654 [2024-04-17 06:56:03.210368] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.654 [2024-04-17 06:56:03.210400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.654 [2024-04-17 06:56:03.210428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.654 [2024-04-17 06:56:03.220278] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.654 [2024-04-17 06:56:03.220307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.654 [2024-04-17 06:56:03.220348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.654 [2024-04-17 06:56:03.230403] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.654 [2024-04-17 06:56:03.230435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.654 [2024-04-17 06:56:03.230463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.654 [2024-04-17 06:56:03.240575] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.654 [2024-04-17 06:56:03.240611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.654 [2024-04-17 06:56:03.240648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.654 [2024-04-17 06:56:03.250560] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.654 [2024-04-17 06:56:03.250595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.654 [2024-04-17 06:56:03.250625] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.654 [2024-04-17 06:56:03.260558] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.654 [2024-04-17 06:56:03.260589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.654 [2024-04-17 06:56:03.260616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.913 [2024-04-17 06:56:03.270731] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.913 [2024-04-17 06:56:03.270765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.913 [2024-04-17 06:56:03.270792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.913 [2024-04-17 06:56:03.280770] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.913 [2024-04-17 06:56:03.280805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.913 [2024-04-17 06:56:03.280835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.913 [2024-04-17 06:56:03.290775] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.914 [2024-04-17 06:56:03.290810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.914 [2024-04-17 06:56:03.290840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.914 [2024-04-17 06:56:03.300697] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.914 [2024-04-17 06:56:03.300732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.914 [2024-04-17 06:56:03.300762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.914 [2024-04-17 06:56:03.310616] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.914 [2024-04-17 06:56:03.310651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.914 [2024-04-17 06:56:03.310682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.914 [2024-04-17 06:56:03.320597] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.914 [2024-04-17 06:56:03.320632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:58.914 [2024-04-17 06:56:03.320662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.914 [2024-04-17 06:56:03.330796] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.914 [2024-04-17 06:56:03.330839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.914 [2024-04-17 06:56:03.330870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.914 [2024-04-17 06:56:03.340974] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.914 [2024-04-17 06:56:03.341010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.914 [2024-04-17 06:56:03.341040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.914 [2024-04-17 06:56:03.351147] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.914 [2024-04-17 06:56:03.351189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.914 [2024-04-17 06:56:03.351222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.914 [2024-04-17 06:56:03.361162] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.914 [2024-04-17 06:56:03.361218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.914 [2024-04-17 06:56:03.361262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.914 [2024-04-17 06:56:03.371359] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.914 [2024-04-17 06:56:03.371388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.914 [2024-04-17 06:56:03.371413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.914 [2024-04-17 06:56:03.381254] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.914 [2024-04-17 06:56:03.381286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.914 [2024-04-17 06:56:03.381312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.914 [2024-04-17 06:56:03.391356] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.914 [2024-04-17 06:56:03.391389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.914 [2024-04-17 06:56:03.391415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.914 [2024-04-17 06:56:03.401327] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.914 [2024-04-17 06:56:03.401360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.914 [2024-04-17 06:56:03.401386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.914 [2024-04-17 06:56:03.411504] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.914 [2024-04-17 06:56:03.411542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.914 [2024-04-17 06:56:03.411572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.914 [2024-04-17 06:56:03.421385] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.914 [2024-04-17 06:56:03.421415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.914 [2024-04-17 06:56:03.421441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.914 [2024-04-17 06:56:03.431482] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.914 [2024-04-17 06:56:03.431518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.914 [2024-04-17 06:56:03.431550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.914 [2024-04-17 06:56:03.441371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.914 [2024-04-17 06:56:03.441401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.914 [2024-04-17 06:56:03.441426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.914 [2024-04-17 06:56:03.451280] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.914 [2024-04-17 06:56:03.451310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.914 [2024-04-17 06:56:03.451335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.914 [2024-04-17 06:56:03.461433] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.914 [2024-04-17 06:56:03.461476] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.914 [2024-04-17 06:56:03.461499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.914 [2024-04-17 06:56:03.471491] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.914 [2024-04-17 06:56:03.471526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.914 [2024-04-17 06:56:03.471556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.914 [2024-04-17 06:56:03.481461] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.914 [2024-04-17 06:56:03.481508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.914 [2024-04-17 06:56:03.481538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.914 [2024-04-17 06:56:03.491220] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.914 [2024-04-17 06:56:03.491266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.914 [2024-04-17 06:56:03.491291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.914 [2024-04-17 06:56:03.501225] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.914 [2024-04-17 06:56:03.501276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.914 [2024-04-17 06:56:03.501302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.914 [2024-04-17 06:56:03.511267] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:58.914 [2024-04-17 06:56:03.511297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.914 [2024-04-17 06:56:03.511322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.173 [2024-04-17 06:56:03.521500] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:59.173 [2024-04-17 06:56:03.521536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.173 [2024-04-17 06:56:03.521566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.173 [2024-04-17 06:56:03.531550] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 
00:29:59.173 [2024-04-17 06:56:03.531597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.173 [2024-04-17 06:56:03.531623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.173 [2024-04-17 06:56:03.541496] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:59.173 [2024-04-17 06:56:03.541545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.173 [2024-04-17 06:56:03.541575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.173 [2024-04-17 06:56:03.551490] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:59.173 [2024-04-17 06:56:03.551524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.173 [2024-04-17 06:56:03.551554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.173 [2024-04-17 06:56:03.561550] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:59.173 [2024-04-17 06:56:03.561585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.173 [2024-04-17 06:56:03.561614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.173 [2024-04-17 06:56:03.571586] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:59.173 [2024-04-17 06:56:03.571621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.173 [2024-04-17 06:56:03.571651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.173 [2024-04-17 06:56:03.582160] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:59.173 [2024-04-17 06:56:03.582204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.173 [2024-04-17 06:56:03.582252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.173 [2024-04-17 06:56:03.592305] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:59.173 [2024-04-17 06:56:03.592335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.173 [2024-04-17 06:56:03.592360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.173 [2024-04-17 06:56:03.602293] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:59.173 [2024-04-17 06:56:03.602324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.173 [2024-04-17 06:56:03.602349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.173 [2024-04-17 06:56:03.612325] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:59.173 [2024-04-17 06:56:03.612355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.173 [2024-04-17 06:56:03.612380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.173 [2024-04-17 06:56:03.622512] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:59.173 [2024-04-17 06:56:03.622547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.173 [2024-04-17 06:56:03.622577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.173 [2024-04-17 06:56:03.632234] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:59.173 [2024-04-17 06:56:03.632282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.173 [2024-04-17 06:56:03.632309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.173 [2024-04-17 06:56:03.642189] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:59.173 [2024-04-17 06:56:03.642237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.173 [2024-04-17 06:56:03.642264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.173 [2024-04-17 06:56:03.652051] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:59.173 [2024-04-17 06:56:03.652087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.173 [2024-04-17 06:56:03.652117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.173 [2024-04-17 06:56:03.662162] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:59.173 [2024-04-17 06:56:03.662205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.173 [2024-04-17 06:56:03.662254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:29:59.173 [2024-04-17 06:56:03.672266] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:59.173 [2024-04-17 06:56:03.672296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.173 [2024-04-17 06:56:03.672327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.173 [2024-04-17 06:56:03.682355] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:59.173 [2024-04-17 06:56:03.682385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.173 [2024-04-17 06:56:03.682410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.173 [2024-04-17 06:56:03.692287] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:59.173 [2024-04-17 06:56:03.692331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.173 [2024-04-17 06:56:03.692357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.173 [2024-04-17 06:56:03.702299] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:59.173 [2024-04-17 06:56:03.702329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.173 [2024-04-17 06:56:03.702353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.173 [2024-04-17 06:56:03.712320] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:59.173 [2024-04-17 06:56:03.712349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.173 [2024-04-17 06:56:03.712375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.174 [2024-04-17 06:56:03.722433] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:59.174 [2024-04-17 06:56:03.722464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.174 [2024-04-17 06:56:03.722490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.174 [2024-04-17 06:56:03.732280] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:59.174 [2024-04-17 06:56:03.732310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.174 [2024-04-17 06:56:03.732335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.174 [2024-04-17 06:56:03.742166] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:59.174 [2024-04-17 06:56:03.742210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.174 [2024-04-17 06:56:03.742255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.174 [2024-04-17 06:56:03.752293] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:59.174 [2024-04-17 06:56:03.752325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.174 [2024-04-17 06:56:03.752352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.174 [2024-04-17 06:56:03.762074] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:59.174 [2024-04-17 06:56:03.762117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.174 [2024-04-17 06:56:03.762148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.174 [2024-04-17 06:56:03.772104] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:59.174 [2024-04-17 06:56:03.772139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.174 [2024-04-17 06:56:03.772170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.432 [2024-04-17 06:56:03.782283] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:59.432 [2024-04-17 06:56:03.782315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.432 [2024-04-17 06:56:03.782342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.432 [2024-04-17 06:56:03.792138] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:59.432 [2024-04-17 06:56:03.792173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.432 [2024-04-17 06:56:03.792230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.432 [2024-04-17 06:56:03.802216] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0) 00:29:59.432 [2024-04-17 06:56:03.802262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.432 [2024-04-17 06:56:03.802287] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:59.432 [2024-04-17 06:56:03.812257] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0)
00:29:59.432 [2024-04-17 06:56:03.812302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:59.432 [2024-04-17 06:56:03.812327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line sequence (nvme_tcp.c:1447 data digest error on tqpair=(0x12993b0), nvme_qpair.c:243 READ sqid:1 cid:15, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats at roughly 10 ms intervals from 06:56:03.822 through 06:56:05.103, the entries differing only in timestamp, lba, and sqhd ...]
00:30:00.728 [2024-04-17 06:56:05.112475] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12993b0)
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.728 [2024-04-17 06:56:05.112519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.728 00:30:00.728 Latency(us) 00:30:00.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:00.728 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:00.728 nvme0n1 : 2.00 3096.90 387.11 0.00 0.00 5160.54 4708.88 11165.39 00:30:00.728 =================================================================================================================== 00:30:00.728 Total : 3096.90 387.11 0.00 0.00 5160.54 4708.88 11165.39 00:30:00.728 0 00:30:00.728 06:56:05 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:00.728 06:56:05 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:00.728 06:56:05 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:00.728 06:56:05 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:00.728 | .driver_specific 00:30:00.728 | .nvme_error 00:30:00.728 | .status_code 00:30:00.728 | .command_transient_transport_error' 00:30:00.986 06:56:05 -- host/digest.sh@71 -- # (( 200 > 0 )) 00:30:00.986 06:56:05 -- host/digest.sh@73 -- # killprocess 115868 00:30:00.986 06:56:05 -- common/autotest_common.sh@936 -- # '[' -z 115868 ']' 00:30:00.986 06:56:05 -- common/autotest_common.sh@940 -- # kill -0 115868 00:30:00.986 06:56:05 -- common/autotest_common.sh@941 -- # uname 00:30:00.986 06:56:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:00.986 06:56:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 115868 00:30:00.986 06:56:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:00.986 06:56:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:00.986 06:56:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 115868' 00:30:00.986 killing process with pid 115868 00:30:00.986 06:56:05 -- common/autotest_common.sh@955 -- # kill 115868 00:30:00.986 Received shutdown signal, test time was about 2.000000 seconds 00:30:00.986 00:30:00.986 Latency(us) 00:30:00.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:00.986 =================================================================================================================== 00:30:00.986 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:00.986 06:56:05 -- common/autotest_common.sh@960 -- # wait 115868 00:30:01.244 06:56:05 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:30:01.244 06:56:05 -- host/digest.sh@54 -- # local rw bs qd 00:30:01.244 06:56:05 -- host/digest.sh@56 -- # rw=randwrite 00:30:01.244 06:56:05 -- host/digest.sh@56 -- # bs=4096 00:30:01.244 06:56:05 -- host/digest.sh@56 -- # qd=128 00:30:01.244 06:56:05 -- host/digest.sh@58 -- # bperfpid=116279 00:30:01.244 06:56:05 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:30:01.244 06:56:05 -- host/digest.sh@60 -- # waitforlisten 116279 /var/tmp/bperf.sock 00:30:01.244 06:56:05 -- common/autotest_common.sh@817 -- # '[' -z 116279 ']' 00:30:01.244 06:56:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 
00:30:01.244 06:56:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:01.244 06:56:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:01.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:01.244 06:56:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:01.244 06:56:05 -- common/autotest_common.sh@10 -- # set +x 00:30:01.244 [2024-04-17 06:56:05.665717] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:30:01.244 [2024-04-17 06:56:05.665797] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116279 ] 00:30:01.244 EAL: No free 2048 kB hugepages reported on node 1 00:30:01.244 [2024-04-17 06:56:05.728533] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.244 [2024-04-17 06:56:05.814507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:01.502 06:56:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:01.502 06:56:05 -- common/autotest_common.sh@850 -- # return 0 00:30:01.502 06:56:05 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:01.502 06:56:05 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:01.760 06:56:06 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:01.760 06:56:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:01.760 06:56:06 -- common/autotest_common.sh@10 -- # set +x 00:30:01.760 06:56:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:01.760 06:56:06 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:01.760 06:56:06 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:02.018 nvme0n1 00:30:02.018 06:56:06 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:02.018 06:56:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.018 06:56:06 -- common/autotest_common.sh@10 -- # set +x 00:30:02.018 06:56:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:02.018 06:56:06 -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:02.018 06:56:06 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:02.276 Running I/O for 2 seconds... 
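The commands above are the setup half of the randwrite digest test: bdevperf is started with its own RPC socket, per-controller error counters and unlimited bdev retries are enabled, CRC32C corruption is injected on the target side through the accel error framework, and the NVMe-oF TCP controller is attached with data digest (--ddgst) before perform_tests launches the 2-second run. A minimal sketch of that flow, assuming the workspace layout and socket paths shown in this log (the target-side rpc_cmd calls are assumed to go to the SPDK default socket, /var/tmp/spdk.sock):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bperf.sock   # bdevperf RPC socket used by this test

  # Start bdevperf on core 1: 2-second randwrite workload, 4 KiB I/O, queue depth 128.
  $SPDK/build/examples/bdevperf -m 2 -r $SOCK -w randwrite -o 4096 -t 2 -q 128 -z &

  # Enable per-controller error counters and unlimited retries, then attach the
  # TCP controller with data digest enabled so digest errors are surfaced.
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # On the target application (default RPC socket assumed), reset any prior injection,
  # then corrupt every 256th CRC32C calculation so the host sees data digest errors.
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

  # Kick off the workload, then read back the transient transport error count the
  # same way the randread pass above did.
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests
  $SPDK/scripts/rpc.py -s $SOCK bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'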
00:30:02.276 [2024-04-17 06:56:06.654606] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f8a50 00:30:02.276 [2024-04-17 06:56:06.655463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.276 [2024-04-17 06:56:06.655504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:02.276 [2024-04-17 06:56:06.667289] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f7100 00:30:02.276 [2024-04-17 06:56:06.668184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.276 [2024-04-17 06:56:06.668216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:02.276 [2024-04-17 06:56:06.679162] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e1710 00:30:02.276 [2024-04-17 06:56:06.680150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.276 [2024-04-17 06:56:06.680186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:02.276 [2024-04-17 06:56:06.691741] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f2948 00:30:02.276 [2024-04-17 06:56:06.692960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.276 [2024-04-17 06:56:06.692990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:02.276 [2024-04-17 06:56:06.704299] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f1ca0 00:30:02.276 [2024-04-17 06:56:06.705656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.276 [2024-04-17 06:56:06.705686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:02.276 [2024-04-17 06:56:06.715472] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190ed920 00:30:02.276 [2024-04-17 06:56:06.716326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.276 [2024-04-17 06:56:06.716355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:02.276 [2024-04-17 06:56:06.726516] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f7da8 00:30:02.276 [2024-04-17 06:56:06.727350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.276 [2024-04-17 06:56:06.727378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0009 p:0 
m:0 dnr:0 00:30:02.276 [2024-04-17 06:56:06.739243] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190efae0 00:30:02.277 [2024-04-17 06:56:06.740292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.277 [2024-04-17 06:56:06.740322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:02.277 [2024-04-17 06:56:06.751797] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e1710 00:30:02.277 [2024-04-17 06:56:06.752937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.277 [2024-04-17 06:56:06.752966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:02.277 [2024-04-17 06:56:06.764444] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e12d8 00:30:02.277 [2024-04-17 06:56:06.765832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:25288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.277 [2024-04-17 06:56:06.765861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:02.277 [2024-04-17 06:56:06.777118] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190fdeb0 00:30:02.277 [2024-04-17 06:56:06.778633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.277 [2024-04-17 06:56:06.778663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:02.277 [2024-04-17 06:56:06.789719] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190efae0 00:30:02.277 [2024-04-17 06:56:06.791346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.277 [2024-04-17 06:56:06.791374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:02.277 [2024-04-17 06:56:06.802265] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e5220 00:30:02.277 [2024-04-17 06:56:06.804070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.277 [2024-04-17 06:56:06.804099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:02.277 [2024-04-17 06:56:06.814769] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f4b08 00:30:02.277 [2024-04-17 06:56:06.816814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.277 [2024-04-17 06:56:06.816842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 
sqhd:0079 p:0 m:0 dnr:0 00:30:02.277 [2024-04-17 06:56:06.823385] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e5a90 00:30:02.277 [2024-04-17 06:56:06.824284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.277 [2024-04-17 06:56:06.824312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:02.277 [2024-04-17 06:56:06.835634] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190dece0 00:30:02.277 [2024-04-17 06:56:06.836464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.277 [2024-04-17 06:56:06.836499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:02.277 [2024-04-17 06:56:06.847705] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190ddc00 00:30:02.277 [2024-04-17 06:56:06.848637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.277 [2024-04-17 06:56:06.848665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:02.277 [2024-04-17 06:56:06.859858] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e3d08 00:30:02.277 [2024-04-17 06:56:06.860803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.277 [2024-04-17 06:56:06.860831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:02.277 [2024-04-17 06:56:06.872222] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e4de8 00:30:02.277 [2024-04-17 06:56:06.873117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.277 [2024-04-17 06:56:06.873145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:02.535 [2024-04-17 06:56:06.884751] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e2c28 00:30:02.535 [2024-04-17 06:56:06.885650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.535 [2024-04-17 06:56:06.885679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:02.535 [2024-04-17 06:56:06.897062] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190ebfd0 00:30:02.535 [2024-04-17 06:56:06.897955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.535 [2024-04-17 06:56:06.897984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:63 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:02.535 [2024-04-17 06:56:06.909318] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190ed0b0 00:30:02.535 [2024-04-17 06:56:06.910171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.535 [2024-04-17 06:56:06.910206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:02.535 [2024-04-17 06:56:06.921359] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f2d80 00:30:02.536 [2024-04-17 06:56:06.922228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.536 [2024-04-17 06:56:06.922257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:02.536 [2024-04-17 06:56:06.933774] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f6020 00:30:02.536 [2024-04-17 06:56:06.934777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.536 [2024-04-17 06:56:06.934805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:02.536 [2024-04-17 06:56:06.946334] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f31b8 00:30:02.536 [2024-04-17 06:56:06.947478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.536 [2024-04-17 06:56:06.947507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:02.536 [2024-04-17 06:56:06.958764] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f57b0 00:30:02.536 [2024-04-17 06:56:06.960082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.536 [2024-04-17 06:56:06.960111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:02.536 [2024-04-17 06:56:06.970889] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190fac10 00:30:02.536 [2024-04-17 06:56:06.972219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.536 [2024-04-17 06:56:06.972247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:02.536 [2024-04-17 06:56:06.982197] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e0ea0 00:30:02.536 [2024-04-17 06:56:06.983465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.536 [2024-04-17 06:56:06.983493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:02.536 [2024-04-17 06:56:06.994764] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190fb8b8 00:30:02.536 [2024-04-17 06:56:06.996242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.536 [2024-04-17 06:56:06.996270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:02.536 [2024-04-17 06:56:07.007279] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f0bc0 00:30:02.536 [2024-04-17 06:56:07.008924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.536 [2024-04-17 06:56:07.008952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:02.536 [2024-04-17 06:56:07.019824] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190fa3a0 00:30:02.536 [2024-04-17 06:56:07.021695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.536 [2024-04-17 06:56:07.021724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:02.536 [2024-04-17 06:56:07.032494] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190eb760 00:30:02.536 [2024-04-17 06:56:07.034432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.536 [2024-04-17 06:56:07.034461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:02.536 [2024-04-17 06:56:07.041096] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e9168 00:30:02.536 [2024-04-17 06:56:07.042042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.536 [2024-04-17 06:56:07.042071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:02.536 [2024-04-17 06:56:07.053380] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e3498 00:30:02.536 [2024-04-17 06:56:07.054214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.536 [2024-04-17 06:56:07.054242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:02.536 [2024-04-17 06:56:07.065436] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190feb58 00:30:02.536 [2024-04-17 06:56:07.066258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.536 [2024-04-17 06:56:07.066286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:02.536 [2024-04-17 06:56:07.077830] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190fef90 00:30:02.536 [2024-04-17 06:56:07.078839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.536 [2024-04-17 06:56:07.078867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:02.536 [2024-04-17 06:56:07.090419] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e38d0 00:30:02.536 [2024-04-17 06:56:07.091664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.536 [2024-04-17 06:56:07.091693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:02.536 [2024-04-17 06:56:07.102741] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e95a0 00:30:02.536 [2024-04-17 06:56:07.104005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.536 [2024-04-17 06:56:07.104034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:02.536 [2024-04-17 06:56:07.115106] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e6300 00:30:02.536 [2024-04-17 06:56:07.116343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.536 [2024-04-17 06:56:07.116371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:02.536 [2024-04-17 06:56:07.127296] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190fbcf0 00:30:02.536 [2024-04-17 06:56:07.128520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.536 [2024-04-17 06:56:07.128548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:02.536 [2024-04-17 06:56:07.139698] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190fcdd0 00:30:02.536 [2024-04-17 06:56:07.141003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.536 [2024-04-17 06:56:07.141032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:02.795 [2024-04-17 06:56:07.152296] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f81e0 00:30:02.795 [2024-04-17 06:56:07.153570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.795 [2024-04-17 06:56:07.153605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:02.795 [2024-04-17 06:56:07.164603] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e84c0 00:30:02.795 [2024-04-17 06:56:07.165847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.795 [2024-04-17 06:56:07.165875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:02.795 [2024-04-17 06:56:07.177095] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f5be8 00:30:02.795 [2024-04-17 06:56:07.178459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.795 [2024-04-17 06:56:07.178488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:02.795 [2024-04-17 06:56:07.190009] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e7c50 00:30:02.795 [2024-04-17 06:56:07.191550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.795 [2024-04-17 06:56:07.191579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:02.795 [2024-04-17 06:56:07.201245] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e7818 00:30:02.795 [2024-04-17 06:56:07.202568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.795 [2024-04-17 06:56:07.202597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:02.795 [2024-04-17 06:56:07.212718] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190eea00 00:30:02.795 [2024-04-17 06:56:07.214076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.795 [2024-04-17 06:56:07.214104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:02.795 [2024-04-17 06:56:07.225346] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e5658 00:30:02.795 [2024-04-17 06:56:07.226938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:10372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.795 [2024-04-17 06:56:07.226968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:02.795 [2024-04-17 06:56:07.238103] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e23b8 00:30:02.795 [2024-04-17 06:56:07.239835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.795 [2024-04-17 06:56:07.239863] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:02.795 [2024-04-17 06:56:07.250784] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e0630 00:30:02.795 [2024-04-17 06:56:07.252523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.795 [2024-04-17 06:56:07.252551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:02.795 [2024-04-17 06:56:07.263432] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f2948 00:30:02.795 [2024-04-17 06:56:07.265630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.795 [2024-04-17 06:56:07.265661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:02.795 [2024-04-17 06:56:07.272601] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190fac10 00:30:02.795 [2024-04-17 06:56:07.273514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.795 [2024-04-17 06:56:07.273546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:02.796 [2024-04-17 06:56:07.286355] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190eff18 00:30:02.796 [2024-04-17 06:56:07.287432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.796 [2024-04-17 06:56:07.287488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:02.796 [2024-04-17 06:56:07.300072] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e5220 00:30:02.796 [2024-04-17 06:56:07.301368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.796 [2024-04-17 06:56:07.301397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:02.796 [2024-04-17 06:56:07.313762] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190eb760 00:30:02.796 [2024-04-17 06:56:07.315216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.796 [2024-04-17 06:56:07.315255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:02.796 [2024-04-17 06:56:07.325892] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190ebb98 00:30:02.796 [2024-04-17 06:56:07.327193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.796 [2024-04-17 06:56:07.327239] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:02.796 [2024-04-17 06:56:07.338389] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190fac10 00:30:02.796 [2024-04-17 06:56:07.339638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.796 [2024-04-17 06:56:07.339670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:02.796 [2024-04-17 06:56:07.351943] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e9e10 00:30:02.796 [2024-04-17 06:56:07.353381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.796 [2024-04-17 06:56:07.353409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:02.796 [2024-04-17 06:56:07.365451] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190eee38 00:30:02.796 [2024-04-17 06:56:07.367089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.796 [2024-04-17 06:56:07.367121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:02.796 [2024-04-17 06:56:07.379116] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190ee5c8 00:30:02.796 [2024-04-17 06:56:07.380899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.796 [2024-04-17 06:56:07.380930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:02.796 [2024-04-17 06:56:07.392672] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e95a0 00:30:02.796 [2024-04-17 06:56:07.394647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.796 [2024-04-17 06:56:07.394679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:03.054 [2024-04-17 06:56:07.406843] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e5ec8 00:30:03.054 [2024-04-17 06:56:07.408980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.054 [2024-04-17 06:56:07.409012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:03.054 [2024-04-17 06:56:07.416050] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e1f80 00:30:03.054 [2024-04-17 06:56:07.416986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.054 [2024-04-17 
06:56:07.417018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:03.054 [2024-04-17 06:56:07.430989] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f8e88 00:30:03.054 [2024-04-17 06:56:07.432109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.054 [2024-04-17 06:56:07.432140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:03.054 [2024-04-17 06:56:07.443168] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190fa7d8 00:30:03.054 [2024-04-17 06:56:07.444883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.054 [2024-04-17 06:56:07.444915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:03.054 [2024-04-17 06:56:07.454377] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190fef90 00:30:03.054 [2024-04-17 06:56:07.455265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.054 [2024-04-17 06:56:07.455292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:03.054 [2024-04-17 06:56:07.467935] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f9f68 00:30:03.055 [2024-04-17 06:56:07.468972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.055 [2024-04-17 06:56:07.469004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:03.055 [2024-04-17 06:56:07.481496] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e5220 00:30:03.055 [2024-04-17 06:56:07.482734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.055 [2024-04-17 06:56:07.482766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:03.055 [2024-04-17 06:56:07.495143] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e6300 00:30:03.055 [2024-04-17 06:56:07.496846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.055 [2024-04-17 06:56:07.496877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:03.055 [2024-04-17 06:56:07.509440] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e6300 00:30:03.055 [2024-04-17 06:56:07.510853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.055 
[2024-04-17 06:56:07.510885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:03.055 [2024-04-17 06:56:07.523910] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e6300 00:30:03.055 [2024-04-17 06:56:07.526018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.055 [2024-04-17 06:56:07.526049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:03.055 [2024-04-17 06:56:07.533151] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e38d0 00:30:03.055 [2024-04-17 06:56:07.534007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.055 [2024-04-17 06:56:07.534038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:03.055 [2024-04-17 06:56:07.546745] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e2c28 00:30:03.055 [2024-04-17 06:56:07.547813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.055 [2024-04-17 06:56:07.547844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:03.055 [2024-04-17 06:56:07.560027] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190fb480 00:30:03.055 [2024-04-17 06:56:07.561113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.055 [2024-04-17 06:56:07.561143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:03.055 [2024-04-17 06:56:07.573036] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f8618 00:30:03.055 [2024-04-17 06:56:07.574110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.055 [2024-04-17 06:56:07.574141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:03.055 [2024-04-17 06:56:07.585198] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190eb328 00:30:03.055 [2024-04-17 06:56:07.586259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.055 [2024-04-17 06:56:07.586287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:03.055 [2024-04-17 06:56:07.598748] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f1430 00:30:03.055 [2024-04-17 06:56:07.599971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17777 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:30:03.055 [2024-04-17 06:56:07.600007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:03.055 [2024-04-17 06:56:07.612282] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190fe2e8 00:30:03.055 [2024-04-17 06:56:07.613672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.055 [2024-04-17 06:56:07.613703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:03.055 [2024-04-17 06:56:07.625938] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f1868 00:30:03.055 [2024-04-17 06:56:07.627548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.055 [2024-04-17 06:56:07.627579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:03.055 [2024-04-17 06:56:07.639525] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190eb328 00:30:03.055 [2024-04-17 06:56:07.641281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.055 [2024-04-17 06:56:07.641309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:03.055 [2024-04-17 06:56:07.653042] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e0630 00:30:03.055 [2024-04-17 06:56:07.654980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.055 [2024-04-17 06:56:07.655011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:03.314 [2024-04-17 06:56:07.667299] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f0788 00:30:03.314 [2024-04-17 06:56:07.669420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.314 [2024-04-17 06:56:07.669449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:03.314 [2024-04-17 06:56:07.676486] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e6738 00:30:03.314 [2024-04-17 06:56:07.677366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.314 [2024-04-17 06:56:07.677395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:03.314 [2024-04-17 06:56:07.690106] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f7da8 00:30:03.314 [2024-04-17 06:56:07.691165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2883 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:30:03.314 [2024-04-17 06:56:07.691204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:03.314 [2024-04-17 06:56:07.702224] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190ee190 00:30:03.314 [2024-04-17 06:56:07.703075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.314 [2024-04-17 06:56:07.703106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:03.314 [2024-04-17 06:56:07.714737] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190df988 00:30:03.314 [2024-04-17 06:56:07.715641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.314 [2024-04-17 06:56:07.715673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:03.314 [2024-04-17 06:56:07.728354] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190ee5c8 00:30:03.314 [2024-04-17 06:56:07.729476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.314 [2024-04-17 06:56:07.729520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:03.314 [2024-04-17 06:56:07.742026] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e38d0 00:30:03.314 [2024-04-17 06:56:07.743243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.314 [2024-04-17 06:56:07.743272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:03.314 [2024-04-17 06:56:07.755544] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e7c50 00:30:03.314 [2024-04-17 06:56:07.756940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.314 [2024-04-17 06:56:07.756971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:03.314 [2024-04-17 06:56:07.769295] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f31b8 00:30:03.314 [2024-04-17 06:56:07.770836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.314 [2024-04-17 06:56:07.770868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:03.314 [2024-04-17 06:56:07.782783] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190ee5c8 00:30:03.314 [2024-04-17 06:56:07.784576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11633 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.314 [2024-04-17 06:56:07.784608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:03.314 [2024-04-17 06:56:07.796038] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f4f40 00:30:03.314 [2024-04-17 06:56:07.797774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.314 [2024-04-17 06:56:07.797806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:03.314 [2024-04-17 06:56:07.807493] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f35f0 00:30:03.314 [2024-04-17 06:56:07.809156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.314 [2024-04-17 06:56:07.809194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:03.314 [2024-04-17 06:56:07.818558] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190ebfd0 00:30:03.314 [2024-04-17 06:56:07.819423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.314 [2024-04-17 06:56:07.819451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:03.314 [2024-04-17 06:56:07.832203] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f4b08 00:30:03.314 [2024-04-17 06:56:07.833239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.314 [2024-04-17 06:56:07.833268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:03.314 [2024-04-17 06:56:07.845646] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f7100 00:30:03.314 [2024-04-17 06:56:07.846880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.314 [2024-04-17 06:56:07.846912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:03.314 [2024-04-17 06:56:07.859198] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f92c0 00:30:03.314 [2024-04-17 06:56:07.860633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.314 [2024-04-17 06:56:07.860665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:03.314 [2024-04-17 06:56:07.872799] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e0ea0 00:30:03.314 [2024-04-17 06:56:07.874450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 
lba:20062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.314 [2024-04-17 06:56:07.874494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:03.314 [2024-04-17 06:56:07.886321] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f4b08 00:30:03.314 [2024-04-17 06:56:07.888065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.314 [2024-04-17 06:56:07.888097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:03.314 [2024-04-17 06:56:07.899818] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e0630 00:30:03.314 [2024-04-17 06:56:07.901783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.314 [2024-04-17 06:56:07.901814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:03.314 [2024-04-17 06:56:07.913384] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f6cc8 00:30:03.314 [2024-04-17 06:56:07.915566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.314 [2024-04-17 06:56:07.915598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:03.573 [2024-04-17 06:56:07.922863] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190ebfd0 00:30:03.573 [2024-04-17 06:56:07.923791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.573 [2024-04-17 06:56:07.923822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:03.573 [2024-04-17 06:56:07.936609] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f2948 00:30:03.573 [2024-04-17 06:56:07.937695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.573 [2024-04-17 06:56:07.937732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:03.573 [2024-04-17 06:56:07.949651] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f2d80 00:30:03.573 [2024-04-17 06:56:07.950851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.573 [2024-04-17 06:56:07.950882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:03.573 [2024-04-17 06:56:07.963120] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e3d08 00:30:03.573 [2024-04-17 06:56:07.964517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:34 nsid:1 lba:17363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.573 [2024-04-17 06:56:07.964560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:03.573 [2024-04-17 06:56:07.976749] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190eee38 00:30:03.573 [2024-04-17 06:56:07.978382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.573 [2024-04-17 06:56:07.978411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:03.573 [2024-04-17 06:56:07.990276] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190eaab8 00:30:03.573 [2024-04-17 06:56:07.991997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.573 [2024-04-17 06:56:07.992029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:03.573 [2024-04-17 06:56:08.003820] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e5a90 00:30:03.573 [2024-04-17 06:56:08.005746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.573 [2024-04-17 06:56:08.005778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:03.573 [2024-04-17 06:56:08.017398] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f20d8 00:30:03.573 [2024-04-17 06:56:08.019571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.573 [2024-04-17 06:56:08.019602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:03.573 [2024-04-17 06:56:08.026517] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f0ff8 00:30:03.573 [2024-04-17 06:56:08.027410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.573 [2024-04-17 06:56:08.027438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:03.573 [2024-04-17 06:56:08.041538] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190fc128 00:30:03.573 [2024-04-17 06:56:08.042621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.573 [2024-04-17 06:56:08.042652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:03.573 [2024-04-17 06:56:08.053693] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f7970 00:30:03.573 [2024-04-17 06:56:08.055443] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.573 [2024-04-17 06:56:08.055486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:03.573 [2024-04-17 06:56:08.064811] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f0788 00:30:03.574 [2024-04-17 06:56:08.065639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.574 [2024-04-17 06:56:08.065670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:03.574 [2024-04-17 06:56:08.078319] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190df118 00:30:03.574 [2024-04-17 06:56:08.079369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.574 [2024-04-17 06:56:08.079397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:03.574 [2024-04-17 06:56:08.091949] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190eee38 00:30:03.574 [2024-04-17 06:56:08.093159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.574 [2024-04-17 06:56:08.093198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:03.574 [2024-04-17 06:56:08.105596] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190ed0b0 00:30:03.574 [2024-04-17 06:56:08.106979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.574 [2024-04-17 06:56:08.107011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:03.574 [2024-04-17 06:56:08.119058] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f1ca0 00:30:03.574 [2024-04-17 06:56:08.120635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.574 [2024-04-17 06:56:08.120666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:03.574 [2024-04-17 06:56:08.132626] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190fdeb0 00:30:03.574 [2024-04-17 06:56:08.134412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.574 [2024-04-17 06:56:08.134440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:03.574 [2024-04-17 06:56:08.146532] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f3e60 00:30:03.574 [2024-04-17 06:56:08.148472] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.574 [2024-04-17 06:56:08.148517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:03.574 [2024-04-17 06:56:08.160110] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f8618 00:30:03.574 [2024-04-17 06:56:08.162230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.574 [2024-04-17 06:56:08.162258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:03.574 [2024-04-17 06:56:08.169389] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190ff3c8 00:30:03.574 [2024-04-17 06:56:08.170259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.574 [2024-04-17 06:56:08.170286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:03.833 [2024-04-17 06:56:08.181977] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190fd208 00:30:03.833 [2024-04-17 06:56:08.182839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.833 [2024-04-17 06:56:08.182871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:03.833 [2024-04-17 06:56:08.197650] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190fd208 00:30:03.833 [2024-04-17 06:56:08.199255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.833 [2024-04-17 06:56:08.199284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.833 [2024-04-17 06:56:08.211355] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190ff3c8 00:30:03.833 [2024-04-17 06:56:08.213069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.833 [2024-04-17 06:56:08.213100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:03.833 [2024-04-17 06:56:08.225041] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e8d30 00:30:03.833 [2024-04-17 06:56:08.226991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.833 [2024-04-17 06:56:08.227022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.833 [2024-04-17 06:56:08.238702] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e6738 00:30:03.833 [2024-04-17 
06:56:08.240769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.833 [2024-04-17 06:56:08.240801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:03.833 [2024-04-17 06:56:08.247913] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f8618 00:30:03.833 [2024-04-17 06:56:08.248743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.833 [2024-04-17 06:56:08.248773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:03.833 [2024-04-17 06:56:08.262631] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f0bc0 00:30:03.833 [2024-04-17 06:56:08.263723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.833 [2024-04-17 06:56:08.263754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:03.833 [2024-04-17 06:56:08.275956] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f8618 00:30:03.833 [2024-04-17 06:56:08.277154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.833 [2024-04-17 06:56:08.277203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.833 [2024-04-17 06:56:08.287844] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190ed0b0 00:30:03.833 [2024-04-17 06:56:08.289747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.833 [2024-04-17 06:56:08.289775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.833 [2024-04-17 06:56:08.299089] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f3e60 00:30:03.833 [2024-04-17 06:56:08.300119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.833 [2024-04-17 06:56:08.300150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:03.833 [2024-04-17 06:56:08.312747] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f4298 00:30:03.833 [2024-04-17 06:56:08.313965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.833 [2024-04-17 06:56:08.313996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:03.833 [2024-04-17 06:56:08.326591] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190fd208 00:30:03.833 
[2024-04-17 06:56:08.327991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.833 [2024-04-17 06:56:08.328023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:03.833 [2024-04-17 06:56:08.340465] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190fc560 00:30:03.833 [2024-04-17 06:56:08.342056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.833 [2024-04-17 06:56:08.342086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:03.833 [2024-04-17 06:56:08.353554] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f5be8 00:30:03.833 [2024-04-17 06:56:08.355124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.833 [2024-04-17 06:56:08.355151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:03.833 [2024-04-17 06:56:08.366245] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190df550 00:30:03.833 [2024-04-17 06:56:08.367909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.833 [2024-04-17 06:56:08.367936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:03.833 [2024-04-17 06:56:08.378853] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190eb760 00:30:03.833 [2024-04-17 06:56:08.380777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.833 [2024-04-17 06:56:08.380804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:03.833 [2024-04-17 06:56:08.391598] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f7da8 00:30:03.833 [2024-04-17 06:56:08.393695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.833 [2024-04-17 06:56:08.393723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.833 [2024-04-17 06:56:08.400352] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190ea248 00:30:03.833 [2024-04-17 06:56:08.401240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.833 [2024-04-17 06:56:08.401270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:03.833 [2024-04-17 06:56:08.411854] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with 
pdu=0x2000190e27f0 00:30:03.833 [2024-04-17 06:56:08.412760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.833 [2024-04-17 06:56:08.412789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:03.833 [2024-04-17 06:56:08.424521] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e5658 00:30:03.833 [2024-04-17 06:56:08.425572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.833 [2024-04-17 06:56:08.425601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:03.833 [2024-04-17 06:56:08.437301] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190fd208 00:30:03.833 [2024-04-17 06:56:08.438691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.833 [2024-04-17 06:56:08.438720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:04.092 [2024-04-17 06:56:08.450374] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e88f8 00:30:04.092 [2024-04-17 06:56:08.451819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.092 [2024-04-17 06:56:08.451848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:04.092 [2024-04-17 06:56:08.463139] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190fb048 00:30:04.092 [2024-04-17 06:56:08.464773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.092 [2024-04-17 06:56:08.464801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:04.092 [2024-04-17 06:56:08.475816] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f0bc0 00:30:04.092 [2024-04-17 06:56:08.477475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.092 [2024-04-17 06:56:08.477502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:04.092 [2024-04-17 06:56:08.488514] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e7818 00:30:04.092 [2024-04-17 06:56:08.490344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.092 [2024-04-17 06:56:08.490373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:04.092 [2024-04-17 06:56:08.501032] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xbbfbe0) with pdu=0x2000190f4298 00:30:04.092 [2024-04-17 06:56:08.503080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.092 [2024-04-17 06:56:08.503108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:04.092 [2024-04-17 06:56:08.509673] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190fda78 00:30:04.092 [2024-04-17 06:56:08.510627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.092 [2024-04-17 06:56:08.510654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:04.092 [2024-04-17 06:56:08.521131] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f3e60 00:30:04.092 [2024-04-17 06:56:08.522033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.092 [2024-04-17 06:56:08.522060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:04.092 [2024-04-17 06:56:08.533826] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190df550 00:30:04.092 [2024-04-17 06:56:08.534909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.092 [2024-04-17 06:56:08.534937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:04.092 [2024-04-17 06:56:08.546428] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190fd208 00:30:04.092 [2024-04-17 06:56:08.547677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.092 [2024-04-17 06:56:08.547704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:04.092 [2024-04-17 06:56:08.559229] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190fe2e8 00:30:04.092 [2024-04-17 06:56:08.560579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.092 [2024-04-17 06:56:08.560607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:04.092 [2024-04-17 06:56:08.571682] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190fbcf0 00:30:04.092 [2024-04-17 06:56:08.573173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.092 [2024-04-17 06:56:08.573221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:04.092 [2024-04-17 06:56:08.584195] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xbbfbe0) with pdu=0x2000190f7da8 00:30:04.092 [2024-04-17 06:56:08.585866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.092 [2024-04-17 06:56:08.585894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:04.092 [2024-04-17 06:56:08.596859] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190eb760 00:30:04.092 [2024-04-17 06:56:08.598744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.092 [2024-04-17 06:56:08.598771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:04.092 [2024-04-17 06:56:08.609467] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e5658 00:30:04.092 [2024-04-17 06:56:08.611473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.092 [2024-04-17 06:56:08.611517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:04.092 [2024-04-17 06:56:08.618018] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190de038 00:30:04.092 [2024-04-17 06:56:08.618951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.093 [2024-04-17 06:56:08.618978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:04.093 [2024-04-17 06:56:08.631782] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e73e0 00:30:04.093 [2024-04-17 06:56:08.633350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.093 [2024-04-17 06:56:08.633379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:04.093 [2024-04-17 06:56:08.642991] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbfbe0) with pdu=0x2000190e1710 00:30:04.093 [2024-04-17 06:56:08.644051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:04.093 [2024-04-17 06:56:08.644079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:04.093 00:30:04.093 Latency(us) 00:30:04.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.093 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:04.093 nvme0n1 : 2.00 20304.59 79.31 0.00 0.00 6293.62 2415.12 16214.09 00:30:04.093 =================================================================================================================== 00:30:04.093 Total : 20304.59 79.31 0.00 0.00 6293.62 2415.12 16214.09 00:30:04.093 0 00:30:04.093 06:56:08 -- host/digest.sh@71 -- # get_transient_errcount 
nvme0n1 00:30:04.093 06:56:08 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:04.093 06:56:08 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:04.093 | .driver_specific 00:30:04.093 | .nvme_error 00:30:04.093 | .status_code 00:30:04.093 | .command_transient_transport_error' 00:30:04.093 06:56:08 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:04.351 06:56:08 -- host/digest.sh@71 -- # (( 159 > 0 )) 00:30:04.351 06:56:08 -- host/digest.sh@73 -- # killprocess 116279 00:30:04.351 06:56:08 -- common/autotest_common.sh@936 -- # '[' -z 116279 ']' 00:30:04.351 06:56:08 -- common/autotest_common.sh@940 -- # kill -0 116279 00:30:04.351 06:56:08 -- common/autotest_common.sh@941 -- # uname 00:30:04.351 06:56:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:04.351 06:56:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116279 00:30:04.351 06:56:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:04.351 06:56:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:04.351 06:56:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116279' 00:30:04.351 killing process with pid 116279 00:30:04.351 06:56:08 -- common/autotest_common.sh@955 -- # kill 116279 00:30:04.351 Received shutdown signal, test time was about 2.000000 seconds 00:30:04.351 00:30:04.351 Latency(us) 00:30:04.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.351 =================================================================================================================== 00:30:04.351 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:04.351 06:56:08 -- common/autotest_common.sh@960 -- # wait 116279 00:30:04.631 06:56:09 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:30:04.631 06:56:09 -- host/digest.sh@54 -- # local rw bs qd 00:30:04.631 06:56:09 -- host/digest.sh@56 -- # rw=randwrite 00:30:04.631 06:56:09 -- host/digest.sh@56 -- # bs=131072 00:30:04.631 06:56:09 -- host/digest.sh@56 -- # qd=16 00:30:04.631 06:56:09 -- host/digest.sh@58 -- # bperfpid=116683 00:30:04.631 06:56:09 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:30:04.631 06:56:09 -- host/digest.sh@60 -- # waitforlisten 116683 /var/tmp/bperf.sock 00:30:04.631 06:56:09 -- common/autotest_common.sh@817 -- # '[' -z 116683 ']' 00:30:04.631 06:56:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:04.631 06:56:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:04.631 06:56:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:04.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:04.631 06:56:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:04.631 06:56:09 -- common/autotest_common.sh@10 -- # set +x 00:30:04.631 [2024-04-17 06:56:09.208366] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:30:04.631 [2024-04-17 06:56:09.208460] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116683 ] 00:30:04.631 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:04.631 Zero copy mechanism will not be used. 00:30:04.631 EAL: No free 2048 kB hugepages reported on node 1 00:30:04.889 [2024-04-17 06:56:09.267483] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.889 [2024-04-17 06:56:09.354272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.889 06:56:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:04.889 06:56:09 -- common/autotest_common.sh@850 -- # return 0 00:30:04.889 06:56:09 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:04.889 06:56:09 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:05.147 06:56:09 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:05.147 06:56:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:05.147 06:56:09 -- common/autotest_common.sh@10 -- # set +x 00:30:05.147 06:56:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:05.147 06:56:09 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:05.147 06:56:09 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:05.716 nvme0n1 00:30:05.716 06:56:10 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:30:05.716 06:56:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:05.716 06:56:10 -- common/autotest_common.sh@10 -- # set +x 00:30:05.716 06:56:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:05.716 06:56:10 -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:05.716 06:56:10 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:05.716 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:05.717 Zero copy mechanism will not be used. 00:30:05.717 Running I/O for 2 seconds... 
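The host/digest.sh trace above drives the error-injection run entirely over the bperf RPC socket: start bdevperf in wait mode, enable per-bdev NVMe error statistics with unlimited retries, attach the NVMe-oF TCP controller with data digest (--ddgst) enabled, arm CRC32C corruption in the accel error injector, kick off the workload, and finally read the transient-transport-error counter back from bdev_get_iostat. A condensed sketch of that same RPC sequence, assuming it is run from the spdk checkout used in this job (all commands, flags, and the jq path are copied from the trace; only the relative paths are an assumption):

  # start bdevperf in wait mode (-z) listening on the RPC socket used below
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  # enable NVMe error counters and retry failed I/O indefinitely
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # attach the target with TCP data digest enabled so CRC32C is verified on every data PDU
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # arm CRC32C corruption in the accel error injector (flags as used in the trace above)
  ./scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 32
  # run the bdevperf workload, then read back the transient transport error count
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The records that follow are the expected result of that setup: each injected digest failure surfaces as a data_crc32_calc_done error in tcp.c and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, which the counter queried above accumulates and the test asserts is greater than zero.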
00:30:05.717 [2024-04-17 06:56:10.274542] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:05.717 [2024-04-17 06:56:10.274969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.717 [2024-04-17 06:56:10.275021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:05.717 [2024-04-17 06:56:10.291422] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:05.717 [2024-04-17 06:56:10.291839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.717 [2024-04-17 06:56:10.291874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:05.717 [2024-04-17 06:56:10.309358] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:05.717 [2024-04-17 06:56:10.309790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.717 [2024-04-17 06:56:10.309825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:05.976 [2024-04-17 06:56:10.326351] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:05.976 [2024-04-17 06:56:10.326749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.976 [2024-04-17 06:56:10.326779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:05.976 [2024-04-17 06:56:10.343068] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:05.976 [2024-04-17 06:56:10.343450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.976 [2024-04-17 06:56:10.343480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:05.976 [2024-04-17 06:56:10.359056] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:05.976 [2024-04-17 06:56:10.359445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.976 [2024-04-17 06:56:10.359475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:05.976 [2024-04-17 06:56:10.377281] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:05.976 [2024-04-17 06:56:10.377689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.976 [2024-04-17 06:56:10.377720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:05.976 [2024-04-17 06:56:10.393367] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:05.976 [2024-04-17 06:56:10.393694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.976 [2024-04-17 06:56:10.393722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:05.976 [2024-04-17 06:56:10.409702] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:05.976 [2024-04-17 06:56:10.410050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.976 [2024-04-17 06:56:10.410078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:05.976 [2024-04-17 06:56:10.425459] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:05.976 [2024-04-17 06:56:10.425811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.976 [2024-04-17 06:56:10.425842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:05.976 [2024-04-17 06:56:10.442620] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:05.976 [2024-04-17 06:56:10.442978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.976 [2024-04-17 06:56:10.443007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:05.976 [2024-04-17 06:56:10.461866] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:05.976 [2024-04-17 06:56:10.462229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.976 [2024-04-17 06:56:10.462260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:05.976 [2024-04-17 06:56:10.476459] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:05.976 [2024-04-17 06:56:10.476705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.976 [2024-04-17 06:56:10.476747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:05.976 [2024-04-17 06:56:10.491831] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:05.976 [2024-04-17 06:56:10.492215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.976 [2024-04-17 06:56:10.492247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:05.976 [2024-04-17 06:56:10.509107] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:05.976 [2024-04-17 06:56:10.509584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.976 [2024-04-17 06:56:10.509621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:05.976 [2024-04-17 06:56:10.525035] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:05.976 [2024-04-17 06:56:10.525412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.976 [2024-04-17 06:56:10.525442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:05.976 [2024-04-17 06:56:10.540839] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:05.976 [2024-04-17 06:56:10.541209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.976 [2024-04-17 06:56:10.541239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:05.976 [2024-04-17 06:56:10.557493] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:05.976 [2024-04-17 06:56:10.557895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.976 [2024-04-17 06:56:10.557925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:05.977 [2024-04-17 06:56:10.575009] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:05.977 [2024-04-17 06:56:10.575394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.977 [2024-04-17 06:56:10.575426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:06.236 [2024-04-17 06:56:10.593150] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.236 [2024-04-17 06:56:10.593581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.236 [2024-04-17 06:56:10.593609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:06.236 [2024-04-17 06:56:10.608614] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.236 [2024-04-17 06:56:10.608973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.236 [2024-04-17 06:56:10.609001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:06.236 [2024-04-17 06:56:10.624865] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.236 [2024-04-17 06:56:10.625244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.236 [2024-04-17 06:56:10.625275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:06.236 [2024-04-17 06:56:10.643231] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.236 [2024-04-17 06:56:10.643640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.236 [2024-04-17 06:56:10.643668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:06.236 [2024-04-17 06:56:10.660349] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.236 [2024-04-17 06:56:10.660882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.236 [2024-04-17 06:56:10.660909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:06.236 [2024-04-17 06:56:10.676174] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.236 [2024-04-17 06:56:10.676575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.236 [2024-04-17 06:56:10.676612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:06.236 [2024-04-17 06:56:10.692471] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.236 [2024-04-17 06:56:10.692907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.236 [2024-04-17 06:56:10.692934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:06.236 [2024-04-17 06:56:10.709139] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.236 [2024-04-17 06:56:10.709543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.236 [2024-04-17 06:56:10.709577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:06.236 [2024-04-17 06:56:10.725543] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.236 [2024-04-17 06:56:10.725896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.236 
[2024-04-17 06:56:10.725923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:06.236 [2024-04-17 06:56:10.740854] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.236 [2024-04-17 06:56:10.741254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.236 [2024-04-17 06:56:10.741290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:06.236 [2024-04-17 06:56:10.756307] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.236 [2024-04-17 06:56:10.756689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.236 [2024-04-17 06:56:10.756726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:06.236 [2024-04-17 06:56:10.771497] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.236 [2024-04-17 06:56:10.771774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.236 [2024-04-17 06:56:10.771801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:06.236 [2024-04-17 06:56:10.787103] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.236 [2024-04-17 06:56:10.787499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.236 [2024-04-17 06:56:10.787529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:06.236 [2024-04-17 06:56:10.802787] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.236 [2024-04-17 06:56:10.803320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.236 [2024-04-17 06:56:10.803350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:06.236 [2024-04-17 06:56:10.818847] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.236 [2024-04-17 06:56:10.819008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.236 [2024-04-17 06:56:10.819036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:06.236 [2024-04-17 06:56:10.835762] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.236 [2024-04-17 06:56:10.836123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:06.236 [2024-04-17 06:56:10.836165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:06.495 [2024-04-17 06:56:10.852982] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.495 [2024-04-17 06:56:10.853381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.495 [2024-04-17 06:56:10.853419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:06.495 [2024-04-17 06:56:10.869172] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.495 [2024-04-17 06:56:10.869554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.495 [2024-04-17 06:56:10.869607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:06.495 [2024-04-17 06:56:10.884770] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.495 [2024-04-17 06:56:10.885120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.495 [2024-04-17 06:56:10.885162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:06.495 [2024-04-17 06:56:10.901807] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.495 [2024-04-17 06:56:10.902202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.495 [2024-04-17 06:56:10.902250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:06.495 [2024-04-17 06:56:10.917328] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.495 [2024-04-17 06:56:10.917681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.496 [2024-04-17 06:56:10.917709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:06.496 [2024-04-17 06:56:10.933713] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.496 [2024-04-17 06:56:10.934071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.496 [2024-04-17 06:56:10.934111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:06.496 [2024-04-17 06:56:10.948675] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.496 [2024-04-17 06:56:10.949019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.496 [2024-04-17 06:56:10.949059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:06.496 [2024-04-17 06:56:10.964796] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.496 [2024-04-17 06:56:10.965190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.496 [2024-04-17 06:56:10.965219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:06.496 [2024-04-17 06:56:10.981532] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.496 [2024-04-17 06:56:10.981888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.496 [2024-04-17 06:56:10.981948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:06.496 [2024-04-17 06:56:10.997938] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.496 [2024-04-17 06:56:10.998331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.496 [2024-04-17 06:56:10.998377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:06.496 [2024-04-17 06:56:11.014163] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.496 [2024-04-17 06:56:11.014587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.496 [2024-04-17 06:56:11.014617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:06.496 [2024-04-17 06:56:11.030613] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.496 [2024-04-17 06:56:11.030971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.496 [2024-04-17 06:56:11.031012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:06.496 [2024-04-17 06:56:11.046837] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.496 [2024-04-17 06:56:11.047289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.496 [2024-04-17 06:56:11.047319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:06.496 [2024-04-17 06:56:11.063170] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.496 [2024-04-17 06:56:11.063570] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.496 [2024-04-17 06:56:11.063599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:06.496 [2024-04-17 06:56:11.080045] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.496 [2024-04-17 06:56:11.080440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.496 [2024-04-17 06:56:11.080485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:06.496 [2024-04-17 06:56:11.097709] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.496 [2024-04-17 06:56:11.098093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.496 [2024-04-17 06:56:11.098143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:06.755 [2024-04-17 06:56:11.116888] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.755 [2024-04-17 06:56:11.117287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.755 [2024-04-17 06:56:11.117335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:06.755 [2024-04-17 06:56:11.133887] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.755 [2024-04-17 06:56:11.134267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.755 [2024-04-17 06:56:11.134321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:06.755 [2024-04-17 06:56:11.150070] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.755 [2024-04-17 06:56:11.150446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.755 [2024-04-17 06:56:11.150475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:06.755 [2024-04-17 06:56:11.167144] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.755 [2024-04-17 06:56:11.167523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.755 [2024-04-17 06:56:11.167564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:06.755 [2024-04-17 06:56:11.184047] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.755 [2024-04-17 
06:56:11.184426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.755 [2024-04-17 06:56:11.184479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:06.755 [2024-04-17 06:56:11.201465] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.755 [2024-04-17 06:56:11.201902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.755 [2024-04-17 06:56:11.201935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:06.755 [2024-04-17 06:56:11.216669] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.755 [2024-04-17 06:56:11.217107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.755 [2024-04-17 06:56:11.217140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:06.755 [2024-04-17 06:56:11.232104] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.755 [2024-04-17 06:56:11.232512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.755 [2024-04-17 06:56:11.232553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:06.755 [2024-04-17 06:56:11.248008] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.755 [2024-04-17 06:56:11.248383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.755 [2024-04-17 06:56:11.248410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:06.755 [2024-04-17 06:56:11.264032] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.755 [2024-04-17 06:56:11.264433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.755 [2024-04-17 06:56:11.264486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:06.755 [2024-04-17 06:56:11.280029] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.755 [2024-04-17 06:56:11.280272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.755 [2024-04-17 06:56:11.280314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:06.755 [2024-04-17 06:56:11.295972] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with 
pdu=0x2000190fef90 00:30:06.755 [2024-04-17 06:56:11.296392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.755 [2024-04-17 06:56:11.296421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:06.755 [2024-04-17 06:56:11.312235] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.755 [2024-04-17 06:56:11.312639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.755 [2024-04-17 06:56:11.312692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:06.755 [2024-04-17 06:56:11.328797] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.755 [2024-04-17 06:56:11.329215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.755 [2024-04-17 06:56:11.329245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:06.755 [2024-04-17 06:56:11.344540] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.755 [2024-04-17 06:56:11.344884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.755 [2024-04-17 06:56:11.344925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:06.755 [2024-04-17 06:56:11.360363] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:06.755 [2024-04-17 06:56:11.360832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.755 [2024-04-17 06:56:11.360877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:07.015 [2024-04-17 06:56:11.376155] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.015 [2024-04-17 06:56:11.376535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.015 [2024-04-17 06:56:11.376562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:07.015 [2024-04-17 06:56:11.390961] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.015 [2024-04-17 06:56:11.391295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.015 [2024-04-17 06:56:11.391323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:07.015 [2024-04-17 06:56:11.407182] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.015 [2024-04-17 06:56:11.407606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.015 [2024-04-17 06:56:11.407638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:07.015 [2024-04-17 06:56:11.424084] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.015 [2024-04-17 06:56:11.424464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.015 [2024-04-17 06:56:11.424494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:07.015 [2024-04-17 06:56:11.440027] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.015 [2024-04-17 06:56:11.440421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.015 [2024-04-17 06:56:11.440450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:07.015 [2024-04-17 06:56:11.455787] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.015 [2024-04-17 06:56:11.456342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.015 [2024-04-17 06:56:11.456371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:07.015 [2024-04-17 06:56:11.471966] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.015 [2024-04-17 06:56:11.472343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.015 [2024-04-17 06:56:11.472372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:07.015 [2024-04-17 06:56:11.489261] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.015 [2024-04-17 06:56:11.489639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.015 [2024-04-17 06:56:11.489666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:07.015 [2024-04-17 06:56:11.503878] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.015 [2024-04-17 06:56:11.504267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.015 [2024-04-17 06:56:11.504311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:07.015 [2024-04-17 06:56:11.520995] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.015 [2024-04-17 06:56:11.521368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.015 [2024-04-17 06:56:11.521396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:07.015 [2024-04-17 06:56:11.535811] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.015 [2024-04-17 06:56:11.536159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.015 [2024-04-17 06:56:11.536208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:07.015 [2024-04-17 06:56:11.551730] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.015 [2024-04-17 06:56:11.552137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.015 [2024-04-17 06:56:11.552164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:07.015 [2024-04-17 06:56:11.568929] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.015 [2024-04-17 06:56:11.569349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.015 [2024-04-17 06:56:11.569380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:07.015 [2024-04-17 06:56:11.585077] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.015 [2024-04-17 06:56:11.585454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.015 [2024-04-17 06:56:11.585499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:07.015 [2024-04-17 06:56:11.602596] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.015 [2024-04-17 06:56:11.602932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.015 [2024-04-17 06:56:11.602960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:07.015 [2024-04-17 06:56:11.619411] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.015 [2024-04-17 06:56:11.619835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.015 [2024-04-17 06:56:11.619863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:30:07.275 [2024-04-17 06:56:11.636003] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.275 [2024-04-17 06:56:11.636433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.275 [2024-04-17 06:56:11.636462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:07.275 [2024-04-17 06:56:11.652876] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.275 [2024-04-17 06:56:11.653255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.275 [2024-04-17 06:56:11.653283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:07.275 [2024-04-17 06:56:11.668647] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.275 [2024-04-17 06:56:11.668996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.275 [2024-04-17 06:56:11.669023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:07.275 [2024-04-17 06:56:11.685747] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.275 [2024-04-17 06:56:11.686115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.275 [2024-04-17 06:56:11.686157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:07.275 [2024-04-17 06:56:11.702684] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.275 [2024-04-17 06:56:11.703030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.275 [2024-04-17 06:56:11.703057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:07.275 [2024-04-17 06:56:11.719012] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.275 [2024-04-17 06:56:11.719426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.275 [2024-04-17 06:56:11.719456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:07.275 [2024-04-17 06:56:11.734533] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.275 [2024-04-17 06:56:11.734911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.275 [2024-04-17 06:56:11.734954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:07.275 [2024-04-17 06:56:11.750100] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.275 [2024-04-17 06:56:11.750529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.275 [2024-04-17 06:56:11.750556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:07.275 [2024-04-17 06:56:11.767723] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.275 [2024-04-17 06:56:11.768105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.275 [2024-04-17 06:56:11.768146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:07.275 [2024-04-17 06:56:11.788731] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.275 [2024-04-17 06:56:11.789119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.275 [2024-04-17 06:56:11.789147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:07.275 [2024-04-17 06:56:11.808280] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.275 [2024-04-17 06:56:11.808648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.275 [2024-04-17 06:56:11.808690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:07.275 [2024-04-17 06:56:11.823722] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.275 [2024-04-17 06:56:11.824139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.275 [2024-04-17 06:56:11.824167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:07.275 [2024-04-17 06:56:11.840235] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.275 [2024-04-17 06:56:11.840639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.275 [2024-04-17 06:56:11.840675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:07.275 [2024-04-17 06:56:11.855564] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.275 [2024-04-17 06:56:11.855933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.275 [2024-04-17 06:56:11.855959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:07.275 [2024-04-17 06:56:11.871332] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.275 [2024-04-17 06:56:11.871732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.275 [2024-04-17 06:56:11.871760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:07.535 [2024-04-17 06:56:11.887427] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.535 [2024-04-17 06:56:11.887804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.535 [2024-04-17 06:56:11.887846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:07.535 [2024-04-17 06:56:11.907759] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.535 [2024-04-17 06:56:11.908134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.535 [2024-04-17 06:56:11.908183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:07.535 [2024-04-17 06:56:11.927012] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.535 [2024-04-17 06:56:11.927415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.535 [2024-04-17 06:56:11.927444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:07.535 [2024-04-17 06:56:11.942643] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.535 [2024-04-17 06:56:11.943023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.535 [2024-04-17 06:56:11.943067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:07.535 [2024-04-17 06:56:11.957327] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.535 [2024-04-17 06:56:11.957727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.535 [2024-04-17 06:56:11.957768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:07.535 [2024-04-17 06:56:11.973083] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.535 [2024-04-17 06:56:11.973474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.535 [2024-04-17 06:56:11.973517] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:07.535 [2024-04-17 06:56:11.988592] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.535 [2024-04-17 06:56:11.988969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.535 [2024-04-17 06:56:11.988997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:07.535 [2024-04-17 06:56:12.003711] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.535 [2024-04-17 06:56:12.004077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.535 [2024-04-17 06:56:12.004118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:07.536 [2024-04-17 06:56:12.020838] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.536 [2024-04-17 06:56:12.021260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.536 [2024-04-17 06:56:12.021301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:07.536 [2024-04-17 06:56:12.036643] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.536 [2024-04-17 06:56:12.037062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.536 [2024-04-17 06:56:12.037109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:07.536 [2024-04-17 06:56:12.052286] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.536 [2024-04-17 06:56:12.052682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.536 [2024-04-17 06:56:12.052709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:07.536 [2024-04-17 06:56:12.066193] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.536 [2024-04-17 06:56:12.066513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.536 [2024-04-17 06:56:12.066540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:07.536 [2024-04-17 06:56:12.083575] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.536 [2024-04-17 06:56:12.083829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.536 
[2024-04-17 06:56:12.083859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:07.536 [2024-04-17 06:56:12.098866] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.536 [2024-04-17 06:56:12.099256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.536 [2024-04-17 06:56:12.099285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:07.536 [2024-04-17 06:56:12.114142] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.536 [2024-04-17 06:56:12.114622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.536 [2024-04-17 06:56:12.114649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:07.536 [2024-04-17 06:56:12.130361] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.536 [2024-04-17 06:56:12.130754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.536 [2024-04-17 06:56:12.130780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:07.795 [2024-04-17 06:56:12.145459] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.795 [2024-04-17 06:56:12.145934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.795 [2024-04-17 06:56:12.145962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:07.795 [2024-04-17 06:56:12.162254] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.795 [2024-04-17 06:56:12.162623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.795 [2024-04-17 06:56:12.162649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:07.795 [2024-04-17 06:56:12.178596] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.795 [2024-04-17 06:56:12.178971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.795 [2024-04-17 06:56:12.178999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:07.795 [2024-04-17 06:56:12.195639] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.795 [2024-04-17 06:56:12.196008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:07.795 [2024-04-17 06:56:12.196036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:07.795 [2024-04-17 06:56:12.212892] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.795 [2024-04-17 06:56:12.213281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.795 [2024-04-17 06:56:12.213308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:07.795 [2024-04-17 06:56:12.227583] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.795 [2024-04-17 06:56:12.227958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.795 [2024-04-17 06:56:12.228000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:07.795 [2024-04-17 06:56:12.244065] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.795 [2024-04-17 06:56:12.244463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.795 [2024-04-17 06:56:12.244492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:07.795 [2024-04-17 06:56:12.260768] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbff20) with pdu=0x2000190fef90 00:30:07.795 [2024-04-17 06:56:12.261165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.795 [2024-04-17 06:56:12.261217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:07.795 00:30:07.795 Latency(us) 00:30:07.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:07.795 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:07.795 nvme0n1 : 2.01 1892.73 236.59 0.00 0.00 8430.76 3131.16 22136.60 00:30:07.795 =================================================================================================================== 00:30:07.795 Total : 1892.73 236.59 0.00 0.00 8430.76 3131.16 22136.60 00:30:07.795 0 00:30:07.795 06:56:12 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:07.795 06:56:12 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:07.795 06:56:12 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:07.795 | .driver_specific 00:30:07.795 | .nvme_error 00:30:07.795 | .status_code 00:30:07.795 | .command_transient_transport_error' 00:30:07.795 06:56:12 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:08.054 06:56:12 -- host/digest.sh@71 -- # (( 122 > 0 )) 00:30:08.054 06:56:12 -- host/digest.sh@73 -- # killprocess 116683 00:30:08.054 06:56:12 -- common/autotest_common.sh@936 -- # '[' -z 116683 ']' 00:30:08.054 06:56:12 -- common/autotest_common.sh@940 -- 
# kill -0 116683 00:30:08.054 06:56:12 -- common/autotest_common.sh@941 -- # uname 00:30:08.054 06:56:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:08.054 06:56:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116683 00:30:08.054 06:56:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:08.054 06:56:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:08.054 06:56:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116683' 00:30:08.054 killing process with pid 116683 00:30:08.054 06:56:12 -- common/autotest_common.sh@955 -- # kill 116683 00:30:08.054 Received shutdown signal, test time was about 2.000000 seconds 00:30:08.054 00:30:08.054 Latency(us) 00:30:08.054 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:08.054 =================================================================================================================== 00:30:08.054 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:08.054 06:56:12 -- common/autotest_common.sh@960 -- # wait 116683 00:30:08.312 06:56:12 -- host/digest.sh@116 -- # killprocess 115327 00:30:08.312 06:56:12 -- common/autotest_common.sh@936 -- # '[' -z 115327 ']' 00:30:08.312 06:56:12 -- common/autotest_common.sh@940 -- # kill -0 115327 00:30:08.312 06:56:12 -- common/autotest_common.sh@941 -- # uname 00:30:08.312 06:56:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:08.312 06:56:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 115327 00:30:08.312 06:56:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:08.312 06:56:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:08.312 06:56:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 115327' 00:30:08.312 killing process with pid 115327 00:30:08.312 06:56:12 -- common/autotest_common.sh@955 -- # kill 115327 00:30:08.312 06:56:12 -- common/autotest_common.sh@960 -- # wait 115327 00:30:08.570 00:30:08.570 real 0m14.932s 00:30:08.570 user 0m29.808s 00:30:08.570 sys 0m3.858s 00:30:08.570 06:56:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:08.570 06:56:12 -- common/autotest_common.sh@10 -- # set +x 00:30:08.570 ************************************ 00:30:08.570 END TEST nvmf_digest_error 00:30:08.570 ************************************ 00:30:08.570 06:56:13 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:30:08.570 06:56:13 -- host/digest.sh@150 -- # nvmftestfini 00:30:08.570 06:56:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:30:08.570 06:56:13 -- nvmf/common.sh@117 -- # sync 00:30:08.570 06:56:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:08.570 06:56:13 -- nvmf/common.sh@120 -- # set +e 00:30:08.570 06:56:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:08.570 06:56:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:08.570 rmmod nvme_tcp 00:30:08.570 rmmod nvme_fabrics 00:30:08.570 rmmod nvme_keyring 00:30:08.570 06:56:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:08.570 06:56:13 -- nvmf/common.sh@124 -- # set -e 00:30:08.570 06:56:13 -- nvmf/common.sh@125 -- # return 0 00:30:08.570 06:56:13 -- nvmf/common.sh@478 -- # '[' -n 115327 ']' 00:30:08.570 06:56:13 -- nvmf/common.sh@479 -- # killprocess 115327 00:30:08.570 06:56:13 -- common/autotest_common.sh@936 -- # '[' -z 115327 ']' 00:30:08.570 06:56:13 -- common/autotest_common.sh@940 -- # kill -0 115327 00:30:08.570 
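For reference, the transient-error check that host/digest.sh performs above reduces to querying bperf's iostat over its RPC socket and pulling the NVMe error counter out with jq. A minimal sketch of that check, using the same rpc.py path, socket, bdev name, and jq filter as this run (values taken from the trace above, not a general-purpose recipe):

    # Read the command_transient_transport_error counter for nvme0n1 from the
    # bdevperf RPC socket used by this run (/var/tmp/bperf.sock).
    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The digest-error test passes when at least one injected data digest error
    # surfaced as a transient transport error (here: 122 > 0).
    (( errcount > 0 )) && echo "digest error path exercised: $errcount transient errors"
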
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (115327) - No such process 00:30:08.570 06:56:13 -- common/autotest_common.sh@963 -- # echo 'Process with pid 115327 is not found' 00:30:08.570 Process with pid 115327 is not found 00:30:08.570 06:56:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:30:08.570 06:56:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:30:08.570 06:56:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:30:08.570 06:56:13 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:08.570 06:56:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:08.570 06:56:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.570 06:56:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:08.570 06:56:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.111 06:56:15 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:11.111 00:30:11.111 real 0m34.828s 00:30:11.111 user 0m59.773s 00:30:11.111 sys 0m9.734s 00:30:11.111 06:56:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:11.111 06:56:15 -- common/autotest_common.sh@10 -- # set +x 00:30:11.111 ************************************ 00:30:11.111 END TEST nvmf_digest 00:30:11.111 ************************************ 00:30:11.111 06:56:15 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:30:11.111 06:56:15 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]] 00:30:11.111 06:56:15 -- nvmf/nvmf.sh@118 -- # [[ phy == phy ]] 00:30:11.111 06:56:15 -- nvmf/nvmf.sh@119 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:11.111 06:56:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:11.111 06:56:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:11.111 06:56:15 -- common/autotest_common.sh@10 -- # set +x 00:30:11.111 ************************************ 00:30:11.111 START TEST nvmf_bdevperf 00:30:11.111 ************************************ 00:30:11.111 06:56:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:11.111 * Looking for test storage... 
00:30:11.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:11.111 06:56:15 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:11.111 06:56:15 -- nvmf/common.sh@7 -- # uname -s 00:30:11.111 06:56:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:11.111 06:56:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:11.111 06:56:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:11.111 06:56:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:11.111 06:56:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:11.111 06:56:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:11.111 06:56:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:11.111 06:56:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:11.111 06:56:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:11.111 06:56:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:11.111 06:56:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:11.111 06:56:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:11.111 06:56:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:11.111 06:56:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:11.111 06:56:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:11.111 06:56:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:11.111 06:56:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:11.111 06:56:15 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:11.111 06:56:15 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:11.111 06:56:15 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:11.111 06:56:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.111 06:56:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.111 06:56:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.111 06:56:15 -- paths/export.sh@5 -- # export PATH 00:30:11.111 06:56:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.111 06:56:15 -- nvmf/common.sh@47 -- # : 0 00:30:11.111 06:56:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:11.111 06:56:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:11.111 06:56:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:11.111 06:56:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:11.111 06:56:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:11.111 06:56:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:11.111 06:56:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:11.111 06:56:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:11.111 06:56:15 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:11.111 06:56:15 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:11.111 06:56:15 -- host/bdevperf.sh@24 -- # nvmftestinit 00:30:11.111 06:56:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:30:11.111 06:56:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:11.111 06:56:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:30:11.111 06:56:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:30:11.111 06:56:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:30:11.111 06:56:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.111 06:56:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:11.111 06:56:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.111 06:56:15 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:30:11.111 06:56:15 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:30:11.111 06:56:15 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:11.111 06:56:15 -- common/autotest_common.sh@10 -- # set +x 00:30:13.012 06:56:17 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:13.012 06:56:17 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:13.012 06:56:17 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:13.012 06:56:17 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:13.012 06:56:17 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:13.012 06:56:17 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:13.012 06:56:17 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:13.012 06:56:17 -- nvmf/common.sh@295 -- # net_devs=() 00:30:13.012 06:56:17 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:13.012 06:56:17 -- nvmf/common.sh@296 
-- # e810=() 00:30:13.012 06:56:17 -- nvmf/common.sh@296 -- # local -ga e810 00:30:13.012 06:56:17 -- nvmf/common.sh@297 -- # x722=() 00:30:13.012 06:56:17 -- nvmf/common.sh@297 -- # local -ga x722 00:30:13.012 06:56:17 -- nvmf/common.sh@298 -- # mlx=() 00:30:13.012 06:56:17 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:13.012 06:56:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:13.012 06:56:17 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:13.012 06:56:17 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:13.012 06:56:17 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:13.012 06:56:17 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:13.012 06:56:17 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:13.012 06:56:17 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:13.012 06:56:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:13.012 06:56:17 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:13.012 06:56:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:13.012 06:56:17 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:13.012 06:56:17 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:13.012 06:56:17 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:13.012 06:56:17 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:13.012 06:56:17 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:13.012 06:56:17 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:13.012 06:56:17 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:13.012 06:56:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:13.012 06:56:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:13.012 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:13.012 06:56:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:13.012 06:56:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:13.012 06:56:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:13.012 06:56:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:13.012 06:56:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:13.012 06:56:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:13.012 06:56:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:13.012 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:13.013 06:56:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:13.013 06:56:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:13.013 06:56:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:13.013 06:56:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:13.013 06:56:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:13.013 06:56:17 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:13.013 06:56:17 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:13.013 06:56:17 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:13.013 06:56:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:13.013 06:56:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.013 06:56:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:13.013 06:56:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.013 06:56:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:13.013 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:30:13.013 06:56:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:30:13.013 06:56:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:13.013 06:56:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.013 06:56:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:13.013 06:56:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.013 06:56:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:13.013 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:13.013 06:56:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:30:13.013 06:56:17 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:30:13.013 06:56:17 -- nvmf/common.sh@403 -- # is_hw=yes 00:30:13.013 06:56:17 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:30:13.013 06:56:17 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:30:13.013 06:56:17 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:30:13.013 06:56:17 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:13.013 06:56:17 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:13.013 06:56:17 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:13.013 06:56:17 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:13.013 06:56:17 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:13.013 06:56:17 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:13.013 06:56:17 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:13.013 06:56:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:13.013 06:56:17 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:13.013 06:56:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:13.013 06:56:17 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:13.013 06:56:17 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:13.013 06:56:17 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:13.013 06:56:17 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:13.013 06:56:17 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:13.013 06:56:17 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:13.013 06:56:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:13.013 06:56:17 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:13.013 06:56:17 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:13.013 06:56:17 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:13.013 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:13.013 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:30:13.013 00:30:13.013 --- 10.0.0.2 ping statistics --- 00:30:13.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.013 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:30:13.013 06:56:17 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:13.013 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:13.013 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:30:13.013 00:30:13.013 --- 10.0.0.1 ping statistics --- 00:30:13.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.013 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:30:13.013 06:56:17 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:13.013 06:56:17 -- nvmf/common.sh@411 -- # return 0 00:30:13.013 06:56:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:30:13.013 06:56:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:13.013 06:56:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:30:13.013 06:56:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:30:13.013 06:56:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:13.013 06:56:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:30:13.013 06:56:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:30:13.013 06:56:17 -- host/bdevperf.sh@25 -- # tgt_init 00:30:13.013 06:56:17 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:13.013 06:56:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:30:13.013 06:56:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:13.013 06:56:17 -- common/autotest_common.sh@10 -- # set +x 00:30:13.013 06:56:17 -- nvmf/common.sh@470 -- # nvmfpid=119101 00:30:13.013 06:56:17 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:13.013 06:56:17 -- nvmf/common.sh@471 -- # waitforlisten 119101 00:30:13.013 06:56:17 -- common/autotest_common.sh@817 -- # '[' -z 119101 ']' 00:30:13.013 06:56:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:13.013 06:56:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:13.013 06:56:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:13.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:13.013 06:56:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:13.013 06:56:17 -- common/autotest_common.sh@10 -- # set +x 00:30:13.013 [2024-04-17 06:56:17.471605] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:30:13.013 [2024-04-17 06:56:17.471692] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:13.013 EAL: No free 2048 kB hugepages reported on node 1 00:30:13.013 [2024-04-17 06:56:17.542661] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:13.272 [2024-04-17 06:56:17.638617] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:13.272 [2024-04-17 06:56:17.638679] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:13.272 [2024-04-17 06:56:17.638695] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:13.272 [2024-04-17 06:56:17.638709] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:13.272 [2024-04-17 06:56:17.638721] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
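The commands traced above assemble the single-host NVMe/TCP loopback topology used for the rest of this run: the first E810 port's netdev (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule opening TCP port 4420 and a ping in each direction to verify reachability. A condensed sketch of the same setup, assuming the cvl_0_0/cvl_0_1 interface names reported by this machine:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns
    modprobe nvme-tcp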
00:30:13.272 [2024-04-17 06:56:17.638850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:13.272 [2024-04-17 06:56:17.640198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:13.272 [2024-04-17 06:56:17.640209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.272 06:56:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:13.272 06:56:17 -- common/autotest_common.sh@850 -- # return 0 00:30:13.272 06:56:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:30:13.272 06:56:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:13.272 06:56:17 -- common/autotest_common.sh@10 -- # set +x 00:30:13.272 06:56:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:13.272 06:56:17 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:13.272 06:56:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:13.272 06:56:17 -- common/autotest_common.sh@10 -- # set +x 00:30:13.272 [2024-04-17 06:56:17.789168] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:13.272 06:56:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:13.272 06:56:17 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:13.272 06:56:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:13.272 06:56:17 -- common/autotest_common.sh@10 -- # set +x 00:30:13.272 Malloc0 00:30:13.272 06:56:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:13.272 06:56:17 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:13.272 06:56:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:13.272 06:56:17 -- common/autotest_common.sh@10 -- # set +x 00:30:13.272 06:56:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:13.272 06:56:17 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:13.272 06:56:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:13.272 06:56:17 -- common/autotest_common.sh@10 -- # set +x 00:30:13.272 06:56:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:13.272 06:56:17 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:13.272 06:56:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:13.272 06:56:17 -- common/autotest_common.sh@10 -- # set +x 00:30:13.272 [2024-04-17 06:56:17.852304] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:13.272 06:56:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:13.272 06:56:17 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:30:13.272 06:56:17 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:30:13.272 06:56:17 -- nvmf/common.sh@521 -- # config=() 00:30:13.272 06:56:17 -- nvmf/common.sh@521 -- # local subsystem config 00:30:13.272 06:56:17 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:30:13.272 06:56:17 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:30:13.272 { 00:30:13.272 "params": { 00:30:13.272 "name": "Nvme$subsystem", 00:30:13.272 "trtype": "$TEST_TRANSPORT", 00:30:13.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:13.272 "adrfam": "ipv4", 00:30:13.272 "trsvcid": "$NVMF_PORT", 00:30:13.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:13.272 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:13.272 "hdgst": ${hdgst:-false}, 00:30:13.272 "ddgst": ${ddgst:-false} 00:30:13.272 }, 00:30:13.272 "method": "bdev_nvme_attach_controller" 00:30:13.272 } 00:30:13.272 EOF 00:30:13.272 )") 00:30:13.272 06:56:17 -- nvmf/common.sh@543 -- # cat 00:30:13.272 06:56:17 -- nvmf/common.sh@545 -- # jq . 00:30:13.272 06:56:17 -- nvmf/common.sh@546 -- # IFS=, 00:30:13.272 06:56:17 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:30:13.272 "params": { 00:30:13.272 "name": "Nvme1", 00:30:13.272 "trtype": "tcp", 00:30:13.272 "traddr": "10.0.0.2", 00:30:13.272 "adrfam": "ipv4", 00:30:13.272 "trsvcid": "4420", 00:30:13.272 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:13.272 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:13.272 "hdgst": false, 00:30:13.272 "ddgst": false 00:30:13.272 }, 00:30:13.272 "method": "bdev_nvme_attach_controller" 00:30:13.272 }' 00:30:13.531 [2024-04-17 06:56:17.900237] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:30:13.531 [2024-04-17 06:56:17.900325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119186 ] 00:30:13.531 EAL: No free 2048 kB hugepages reported on node 1 00:30:13.531 [2024-04-17 06:56:17.960429] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.531 [2024-04-17 06:56:18.048912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.531 [2024-04-17 06:56:18.057700] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:30:13.790 Running I/O for 1 seconds... 00:30:15.169 00:30:15.169 Latency(us) 00:30:15.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:15.169 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:15.169 Verification LBA range: start 0x0 length 0x4000 00:30:15.169 Nvme1n1 : 1.01 8146.75 31.82 0.00 0.00 15642.07 2597.17 19320.98 00:30:15.169 =================================================================================================================== 00:30:15.169 Total : 8146.75 31.82 0.00 0.00 15642.07 2597.17 19320.98 00:30:15.169 06:56:19 -- host/bdevperf.sh@30 -- # bdevperfpid=119331 00:30:15.169 06:56:19 -- host/bdevperf.sh@32 -- # sleep 3 00:30:15.169 06:56:19 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:30:15.169 06:56:19 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:30:15.169 06:56:19 -- nvmf/common.sh@521 -- # config=() 00:30:15.169 06:56:19 -- nvmf/common.sh@521 -- # local subsystem config 00:30:15.169 06:56:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:30:15.169 06:56:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:30:15.169 { 00:30:15.169 "params": { 00:30:15.169 "name": "Nvme$subsystem", 00:30:15.169 "trtype": "$TEST_TRANSPORT", 00:30:15.169 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.169 "adrfam": "ipv4", 00:30:15.169 "trsvcid": "$NVMF_PORT", 00:30:15.169 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.169 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.169 "hdgst": ${hdgst:-false}, 00:30:15.169 "ddgst": ${ddgst:-false} 00:30:15.169 }, 00:30:15.169 "method": "bdev_nvme_attach_controller" 00:30:15.169 } 00:30:15.169 EOF 00:30:15.169 )") 00:30:15.169 06:56:19 -- nvmf/common.sh@543 -- # cat 00:30:15.169 
06:56:19 -- nvmf/common.sh@545 -- # jq . 00:30:15.169 06:56:19 -- nvmf/common.sh@546 -- # IFS=, 00:30:15.169 06:56:19 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:30:15.169 "params": { 00:30:15.169 "name": "Nvme1", 00:30:15.169 "trtype": "tcp", 00:30:15.169 "traddr": "10.0.0.2", 00:30:15.169 "adrfam": "ipv4", 00:30:15.169 "trsvcid": "4420", 00:30:15.169 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:15.169 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:15.169 "hdgst": false, 00:30:15.169 "ddgst": false 00:30:15.169 }, 00:30:15.169 "method": "bdev_nvme_attach_controller" 00:30:15.169 }' 00:30:15.169 [2024-04-17 06:56:19.643266] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:30:15.169 [2024-04-17 06:56:19.643346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119331 ] 00:30:15.169 EAL: No free 2048 kB hugepages reported on node 1 00:30:15.169 [2024-04-17 06:56:19.706952] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.427 [2024-04-17 06:56:19.791937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.427 [2024-04-17 06:56:19.800668] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:30:15.427 Running I/O for 15 seconds... 00:30:18.712 06:56:22 -- host/bdevperf.sh@33 -- # kill -9 119101 00:30:18.712 06:56:22 -- host/bdevperf.sh@35 -- # sleep 3 00:30:18.712 [2024-04-17 06:56:22.611506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:45928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.712 [2024-04-17 06:56:22.611562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.712 [2024-04-17 06:56:22.611604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:45936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.712 [2024-04-17 06:56:22.611640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.712 [2024-04-17 06:56:22.611658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:45944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.712 [2024-04-17 06:56:22.611672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.712 [2024-04-17 06:56:22.611687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:45952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.712 [2024-04-17 06:56:22.611702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.712 [2024-04-17 06:56:22.611733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:45960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.712 [2024-04-17 06:56:22.611747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.712 [2024-04-17 06:56:22.611763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:45968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.712 [2024-04-17 06:56:22.611777] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.712 [2024-04-17 06:56:22.611791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:45976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.712 [2024-04-17 06:56:22.611804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.611834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:45984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.611846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.611860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:45992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.611874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.611888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:46000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.611902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.611917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:46008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.611929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.611945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:46016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.611957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.611971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:46024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.611998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.612013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:46032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.612045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:46040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.612072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:46048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612084] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.612100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:46056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.612127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:46064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.612155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:46072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.612209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:46080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.612237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:46088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.612265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:46096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.612292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:46104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.612320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:46112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.612348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:46120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.612375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:46128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.612406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:46136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.612436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:46144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.612481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:46152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.612508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:46160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.612549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:46168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.612575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:46176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.612601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:46184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.612627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:46192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.612655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:46200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.612681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:46208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:18.713 [2024-04-17 06:56:22.612707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:46216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.612747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:46224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.612773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.612802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:46240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.612827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:46248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.612853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:46256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.612877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:46264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.612902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:46272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.612927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:46280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.612952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:46288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.713 [2024-04-17 06:56:22.612977] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:46296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.713 [2024-04-17 06:56:22.612989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:46304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:46312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:46320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:46328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:46336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:46344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:46352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:46360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:46368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613283] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:46376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:46384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:46392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:46400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:46408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:46416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:46424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:46432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:46440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:46448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:68 nsid:1 lba:46456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:46464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:46472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:46480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:46488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:46496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:46504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:46520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:46528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:46536 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:46544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:46552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:46560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.613976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:46568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.613988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.614002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:46576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.614014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.614028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:46584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.614040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.614054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:46592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.614066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.614080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:46600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.614092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.614106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:46608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.714 [2024-04-17 06:56:22.614118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.714 [2024-04-17 06:56:22.614132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:46616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:18.715 [2024-04-17 06:56:22.614144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.614172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:46624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.614194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.614209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:46632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.614223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.614241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.614255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.614270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:46648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.614283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.614297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:46656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.614310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.614325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:46664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.614338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.614353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:46672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.614365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.614380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:46680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.614393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.614408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:46688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.614421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.614436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:46696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.614448] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.614479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:46704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.614492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.614506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:46712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.614518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.614545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:46720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.614557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.614571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:46728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.614582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.614596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:46736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.614611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.614625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:46744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.614638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.614652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:46752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.614664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.614677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:46760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.614689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.614702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:46768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.614714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.614727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:46776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.614739] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.614752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:46784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.614778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.614793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:46792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.614804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.614817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:46800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.614829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.614842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:46808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.614853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.614866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:46816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.614877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.614890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:46824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.614901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.614914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:46832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.614927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.614941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:46840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.614956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.614970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:46848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.614982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.614995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:46856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.615007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.615020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:46864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.615031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.615044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:46872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.615056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.615069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:46880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.615080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.615093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:46888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.615105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.615117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:46896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.615129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.615142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:46904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.615154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.615190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:46912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.615205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.615220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:46920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.615234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.715 [2024-04-17 06:56:22.615249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:46928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.715 [2024-04-17 06:56:22.615262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.716 [2024-04-17 06:56:22.615277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:46936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.716 [2024-04-17 06:56:22.615291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:30:18.716 [2024-04-17 06:56:22.615309] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf92f80 is same with the state(5) to be set 00:30:18.716 [2024-04-17 06:56:22.615325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.716 [2024-04-17 06:56:22.615337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.716 [2024-04-17 06:56:22.615348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46944 len:8 PRP1 0x0 PRP2 0x0 00:30:18.716 [2024-04-17 06:56:22.615361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.716 [2024-04-17 06:56:22.615422] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf92f80 was disconnected and freed. reset controller. 00:30:18.716 [2024-04-17 06:56:22.615504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.716 [2024-04-17 06:56:22.615524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.716 [2024-04-17 06:56:22.615539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.716 [2024-04-17 06:56:22.615551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.716 [2024-04-17 06:56:22.615563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.716 [2024-04-17 06:56:22.615575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.716 [2024-04-17 06:56:22.615588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.716 [2024-04-17 06:56:22.615600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.716 [2024-04-17 06:56:22.615612] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.716 [2024-04-17 06:56:22.618846] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.716 [2024-04-17 06:56:22.618880] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.716 [2024-04-17 06:56:22.619439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.716 [2024-04-17 06:56:22.619643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.716 [2024-04-17 06:56:22.619668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.716 [2024-04-17 06:56:22.619684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.716 [2024-04-17 06:56:22.619936] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.716 [2024-04-17 06:56:22.620128] 
nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.716 [2024-04-17 06:56:22.620145] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.716 [2024-04-17 06:56:22.620184] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.716 [2024-04-17 06:56:22.623700] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.716 [2024-04-17 06:56:22.632841] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.716 [2024-04-17 06:56:22.633218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.716 [2024-04-17 06:56:22.633412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.716 [2024-04-17 06:56:22.633438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.716 [2024-04-17 06:56:22.633454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.716 [2024-04-17 06:56:22.633713] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.716 [2024-04-17 06:56:22.633904] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.716 [2024-04-17 06:56:22.633922] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.716 [2024-04-17 06:56:22.633935] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.716 [2024-04-17 06:56:22.637437] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.716 [2024-04-17 06:56:22.646805] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.716 [2024-04-17 06:56:22.647318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.716 [2024-04-17 06:56:22.647540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.716 [2024-04-17 06:56:22.647565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.716 [2024-04-17 06:56:22.647580] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.716 [2024-04-17 06:56:22.647828] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.716 [2024-04-17 06:56:22.648019] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.716 [2024-04-17 06:56:22.648038] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.716 [2024-04-17 06:56:22.648050] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.716 [2024-04-17 06:56:22.651592] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
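The repeated posix_sock_create failures above all report errno = 111, which on Linux is ECONNREFUSED: the target at 10.0.0.2 no longer has a listener on NVMe/TCP port 4420, so every reconnect attempt during the controller reset is rejected immediately. A minimal stand-alone sketch (not SPDK code; the address and port are simply copied from the log) that reproduces the same errno on a host where 10.0.0.2 is reachable but nothing is listening on 4420:

```c
/* Minimal sketch, not SPDK code: a plain TCP connect() to an address with no
 * listener fails with errno 111 (ECONNREFUSED), the error reported by the
 * posix_sock_create messages above for 10.0.0.2:4420. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                     /* NVMe/TCP port, as in the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);  /* target address taken from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the listener gone, this prints errno = 111 (Connection refused). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}
```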
00:30:18.716 [2024-04-17 06:56:22.660736] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.716 [2024-04-17 06:56:22.661211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.716 [2024-04-17 06:56:22.661404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.716 [2024-04-17 06:56:22.661429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.716 [2024-04-17 06:56:22.661444] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.716 [2024-04-17 06:56:22.661681] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.716 [2024-04-17 06:56:22.661879] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.716 [2024-04-17 06:56:22.661898] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.716 [2024-04-17 06:56:22.661910] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.716 [2024-04-17 06:56:22.665434] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.716 [2024-04-17 06:56:22.674671] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.716 [2024-04-17 06:56:22.675116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.716 [2024-04-17 06:56:22.675314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.716 [2024-04-17 06:56:22.675341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.716 [2024-04-17 06:56:22.675362] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.716 [2024-04-17 06:56:22.675604] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.716 [2024-04-17 06:56:22.675812] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.716 [2024-04-17 06:56:22.675830] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.716 [2024-04-17 06:56:22.675842] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.716 [2024-04-17 06:56:22.679325] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.716 [2024-04-17 06:56:22.688497] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.716 [2024-04-17 06:56:22.689097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.716 [2024-04-17 06:56:22.689478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.716 [2024-04-17 06:56:22.689507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.716 [2024-04-17 06:56:22.689538] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.716 [2024-04-17 06:56:22.689766] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.716 [2024-04-17 06:56:22.689959] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.716 [2024-04-17 06:56:22.689977] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.716 [2024-04-17 06:56:22.689989] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.716 [2024-04-17 06:56:22.693489] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.716 [2024-04-17 06:56:22.702418] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.716 [2024-04-17 06:56:22.703020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.716 [2024-04-17 06:56:22.703275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.716 [2024-04-17 06:56:22.703304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.716 [2024-04-17 06:56:22.703320] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.716 [2024-04-17 06:56:22.703565] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.716 [2024-04-17 06:56:22.703773] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.716 [2024-04-17 06:56:22.703792] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.716 [2024-04-17 06:56:22.703804] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.716 [2024-04-17 06:56:22.707277] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
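The "Failed to flush tqpair ... (9): Bad file descriptor" lines follow directly from the refused connect: by the time the qpair is flushed its socket has already been torn down, so the flush hits errno 9 (EBADF). A minimal sketch, assuming nothing beyond standard POSIX sockets, of the same error:

```c
/* Minimal sketch, not SPDK code: operating on a socket that has already been
 * closed fails with errno 9 (EBADF), matching the
 * "Failed to flush tqpair ... (9): Bad file descriptor" lines above. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    close(fd);  /* the qpair's socket is already gone by the time the flush runs */

    char byte = 0;
    if (send(fd, &byte, 1, 0) < 0) {
        /* Prints errno = 9 (Bad file descriptor). */
        printf("flush failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    return 0;
}
```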
00:30:18.716 [2024-04-17 06:56:22.716250] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.716 [2024-04-17 06:56:22.716696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.716 [2024-04-17 06:56:22.716914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.716 [2024-04-17 06:56:22.716940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.716 [2024-04-17 06:56:22.716955] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.717 [2024-04-17 06:56:22.717235] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.717 [2024-04-17 06:56:22.717439] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.717 [2024-04-17 06:56:22.717458] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.717 [2024-04-17 06:56:22.717486] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.717 [2024-04-17 06:56:22.720964] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.717 [2024-04-17 06:56:22.730124] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.717 [2024-04-17 06:56:22.730662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.717 [2024-04-17 06:56:22.730839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.717 [2024-04-17 06:56:22.730865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.717 [2024-04-17 06:56:22.730880] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.717 [2024-04-17 06:56:22.731117] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.717 [2024-04-17 06:56:22.731356] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.717 [2024-04-17 06:56:22.731377] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.717 [2024-04-17 06:56:22.731390] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.717 [2024-04-17 06:56:22.734909] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.717 [2024-04-17 06:56:22.744078] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.717 [2024-04-17 06:56:22.744497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.717 [2024-04-17 06:56:22.744708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.717 [2024-04-17 06:56:22.744734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.717 [2024-04-17 06:56:22.744750] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.717 [2024-04-17 06:56:22.745004] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.717 [2024-04-17 06:56:22.745238] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.717 [2024-04-17 06:56:22.745273] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.717 [2024-04-17 06:56:22.745286] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.717 [2024-04-17 06:56:22.748789] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.717 [2024-04-17 06:56:22.757937] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.717 [2024-04-17 06:56:22.758396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.717 [2024-04-17 06:56:22.758622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.717 [2024-04-17 06:56:22.758648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.717 [2024-04-17 06:56:22.758663] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.717 [2024-04-17 06:56:22.758906] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.717 [2024-04-17 06:56:22.759102] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.717 [2024-04-17 06:56:22.759121] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.717 [2024-04-17 06:56:22.759133] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.717 [2024-04-17 06:56:22.762659] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.717 [2024-04-17 06:56:22.771783] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.717 [2024-04-17 06:56:22.772296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.717 [2024-04-17 06:56:22.772484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.717 [2024-04-17 06:56:22.772510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.717 [2024-04-17 06:56:22.772525] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.717 [2024-04-17 06:56:22.772774] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.717 [2024-04-17 06:56:22.772966] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.717 [2024-04-17 06:56:22.772984] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.717 [2024-04-17 06:56:22.772996] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.717 [2024-04-17 06:56:22.776514] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.717 [2024-04-17 06:56:22.785651] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.717 [2024-04-17 06:56:22.786048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.717 [2024-04-17 06:56:22.786276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.717 [2024-04-17 06:56:22.786302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.717 [2024-04-17 06:56:22.786318] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.717 [2024-04-17 06:56:22.786557] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.717 [2024-04-17 06:56:22.786764] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.717 [2024-04-17 06:56:22.786782] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.717 [2024-04-17 06:56:22.786794] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.717 [2024-04-17 06:56:22.790269] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.717 [2024-04-17 06:56:22.799661] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.717 [2024-04-17 06:56:22.800088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.717 [2024-04-17 06:56:22.800272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.717 [2024-04-17 06:56:22.800299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.717 [2024-04-17 06:56:22.800315] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.717 [2024-04-17 06:56:22.800552] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.717 [2024-04-17 06:56:22.800759] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.717 [2024-04-17 06:56:22.800782] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.717 [2024-04-17 06:56:22.800794] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.717 [2024-04-17 06:56:22.804271] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.717 [2024-04-17 06:56:22.813629] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.717 [2024-04-17 06:56:22.814086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.717 [2024-04-17 06:56:22.814318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.717 [2024-04-17 06:56:22.814344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.717 [2024-04-17 06:56:22.814360] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.717 [2024-04-17 06:56:22.814624] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.717 [2024-04-17 06:56:22.814815] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.717 [2024-04-17 06:56:22.814834] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.717 [2024-04-17 06:56:22.814846] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.718 [2024-04-17 06:56:22.818315] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.718 [2024-04-17 06:56:22.827468] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.718 [2024-04-17 06:56:22.828065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.718 [2024-04-17 06:56:22.828328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.718 [2024-04-17 06:56:22.828356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.718 [2024-04-17 06:56:22.828372] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.718 [2024-04-17 06:56:22.828617] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.718 [2024-04-17 06:56:22.828809] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.718 [2024-04-17 06:56:22.828828] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.718 [2024-04-17 06:56:22.828840] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.718 [2024-04-17 06:56:22.832309] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.718 [2024-04-17 06:56:22.841260] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.718 [2024-04-17 06:56:22.841786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.718 [2024-04-17 06:56:22.841976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.718 [2024-04-17 06:56:22.842004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.718 [2024-04-17 06:56:22.842019] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.718 [2024-04-17 06:56:22.842292] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.718 [2024-04-17 06:56:22.842511] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.718 [2024-04-17 06:56:22.842530] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.718 [2024-04-17 06:56:22.842547] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.718 [2024-04-17 06:56:22.846028] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.718 [2024-04-17 06:56:22.855200] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.718 [2024-04-17 06:56:22.855645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.718 [2024-04-17 06:56:22.855863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.718 [2024-04-17 06:56:22.855889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.718 [2024-04-17 06:56:22.855905] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.718 [2024-04-17 06:56:22.856154] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.718 [2024-04-17 06:56:22.856394] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.718 [2024-04-17 06:56:22.856414] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.718 [2024-04-17 06:56:22.856427] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.718 [2024-04-17 06:56:22.859916] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.718 [2024-04-17 06:56:22.869087] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.718 [2024-04-17 06:56:22.869532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.718 [2024-04-17 06:56:22.869663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.718 [2024-04-17 06:56:22.869689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.718 [2024-04-17 06:56:22.869704] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.718 [2024-04-17 06:56:22.869946] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.718 [2024-04-17 06:56:22.870163] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.718 [2024-04-17 06:56:22.870194] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.718 [2024-04-17 06:56:22.870208] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.718 [2024-04-17 06:56:22.873720] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.718 [2024-04-17 06:56:22.882988] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.718 [2024-04-17 06:56:22.883436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.718 [2024-04-17 06:56:22.883629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.718 [2024-04-17 06:56:22.883656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.718 [2024-04-17 06:56:22.883672] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.718 [2024-04-17 06:56:22.883885] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.718 [2024-04-17 06:56:22.884102] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.718 [2024-04-17 06:56:22.884123] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.718 [2024-04-17 06:56:22.884136] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.718 [2024-04-17 06:56:22.887661] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.718 [2024-04-17 06:56:22.896787] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.718 [2024-04-17 06:56:22.897235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.718 [2024-04-17 06:56:22.897419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.718 [2024-04-17 06:56:22.897445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.718 [2024-04-17 06:56:22.897460] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.718 [2024-04-17 06:56:22.897710] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.718 [2024-04-17 06:56:22.897901] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.718 [2024-04-17 06:56:22.897920] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.718 [2024-04-17 06:56:22.897932] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.718 [2024-04-17 06:56:22.901442] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.718 [2024-04-17 06:56:22.910777] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.718 [2024-04-17 06:56:22.911219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.718 [2024-04-17 06:56:22.911388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.718 [2024-04-17 06:56:22.911414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.718 [2024-04-17 06:56:22.911429] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.718 [2024-04-17 06:56:22.911666] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.718 [2024-04-17 06:56:22.911872] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.718 [2024-04-17 06:56:22.911891] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.718 [2024-04-17 06:56:22.911902] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.718 [2024-04-17 06:56:22.915408] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.718 [2024-04-17 06:56:22.924737] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.718 [2024-04-17 06:56:22.925166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.718 [2024-04-17 06:56:22.925364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.718 [2024-04-17 06:56:22.925392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.718 [2024-04-17 06:56:22.925407] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.718 [2024-04-17 06:56:22.925644] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.718 [2024-04-17 06:56:22.925885] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.718 [2024-04-17 06:56:22.925908] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.718 [2024-04-17 06:56:22.925923] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.718 [2024-04-17 06:56:22.929449] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.718 [2024-04-17 06:56:22.938580] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.718 [2024-04-17 06:56:22.939187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.718 [2024-04-17 06:56:22.939464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.718 [2024-04-17 06:56:22.939493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.718 [2024-04-17 06:56:22.939509] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.718 [2024-04-17 06:56:22.939755] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.718 [2024-04-17 06:56:22.939948] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.718 [2024-04-17 06:56:22.939966] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.718 [2024-04-17 06:56:22.939978] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.718 [2024-04-17 06:56:22.943499] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.718 [2024-04-17 06:56:22.952426] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.718 [2024-04-17 06:56:22.952818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.718 [2024-04-17 06:56:22.952996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.719 [2024-04-17 06:56:22.953022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.719 [2024-04-17 06:56:22.953038] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.719 [2024-04-17 06:56:22.953317] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.719 [2024-04-17 06:56:22.953538] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.719 [2024-04-17 06:56:22.953556] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.719 [2024-04-17 06:56:22.953568] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.719 [2024-04-17 06:56:22.957029] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.719 [2024-04-17 06:56:22.966400] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.719 [2024-04-17 06:56:22.966844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.719 [2024-04-17 06:56:22.967048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.719 [2024-04-17 06:56:22.967074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.719 [2024-04-17 06:56:22.967090] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.719 [2024-04-17 06:56:22.967367] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.719 [2024-04-17 06:56:22.967584] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.719 [2024-04-17 06:56:22.967603] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.719 [2024-04-17 06:56:22.967614] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.719 [2024-04-17 06:56:22.971077] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.719 [2024-04-17 06:56:22.980261] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.719 [2024-04-17 06:56:22.980944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.719 [2024-04-17 06:56:22.981245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.719 [2024-04-17 06:56:22.981275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.719 [2024-04-17 06:56:22.981291] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.719 [2024-04-17 06:56:22.981556] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.719 [2024-04-17 06:56:22.981749] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.719 [2024-04-17 06:56:22.981767] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.719 [2024-04-17 06:56:22.981779] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.719 [2024-04-17 06:56:22.985251] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.719 [2024-04-17 06:56:22.994200] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.719 [2024-04-17 06:56:22.994873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.719 [2024-04-17 06:56:22.995073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.719 [2024-04-17 06:56:22.995100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.719 [2024-04-17 06:56:22.995116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.719 [2024-04-17 06:56:22.995385] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.719 [2024-04-17 06:56:22.995615] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.719 [2024-04-17 06:56:22.995633] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.719 [2024-04-17 06:56:22.995645] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.719 [2024-04-17 06:56:22.999112] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.719 [2024-04-17 06:56:23.008071] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.719 [2024-04-17 06:56:23.008540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.719 [2024-04-17 06:56:23.008731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.719 [2024-04-17 06:56:23.008771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.719 [2024-04-17 06:56:23.008787] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.719 [2024-04-17 06:56:23.009021] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.719 [2024-04-17 06:56:23.009239] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.719 [2024-04-17 06:56:23.009274] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.719 [2024-04-17 06:56:23.009287] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.719 [2024-04-17 06:56:23.012804] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.719 [2024-04-17 06:56:23.021963] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.719 [2024-04-17 06:56:23.022432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.719 [2024-04-17 06:56:23.022596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.719 [2024-04-17 06:56:23.022627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.719 [2024-04-17 06:56:23.022644] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.719 [2024-04-17 06:56:23.022910] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.719 [2024-04-17 06:56:23.023103] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.719 [2024-04-17 06:56:23.023121] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.719 [2024-04-17 06:56:23.023133] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.719 [2024-04-17 06:56:23.026682] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.719 [2024-04-17 06:56:23.035849] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.719 [2024-04-17 06:56:23.036384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.719 [2024-04-17 06:56:23.036533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.719 [2024-04-17 06:56:23.036559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.719 [2024-04-17 06:56:23.036574] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.719 [2024-04-17 06:56:23.036803] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.719 [2024-04-17 06:56:23.037027] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.719 [2024-04-17 06:56:23.037047] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.719 [2024-04-17 06:56:23.037075] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.719 [2024-04-17 06:56:23.040651] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.719 [2024-04-17 06:56:23.049841] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.719 [2024-04-17 06:56:23.050415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.719 [2024-04-17 06:56:23.050667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.719 [2024-04-17 06:56:23.050694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.719 [2024-04-17 06:56:23.050710] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.719 [2024-04-17 06:56:23.050949] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.719 [2024-04-17 06:56:23.051146] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.719 [2024-04-17 06:56:23.051164] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.719 [2024-04-17 06:56:23.051182] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.719 [2024-04-17 06:56:23.054666] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.719 [2024-04-17 06:56:23.063842] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.719 [2024-04-17 06:56:23.064339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.719 [2024-04-17 06:56:23.064529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.719 [2024-04-17 06:56:23.064563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.719 [2024-04-17 06:56:23.064584] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.719 [2024-04-17 06:56:23.064835] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.719 [2024-04-17 06:56:23.065033] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.719 [2024-04-17 06:56:23.065052] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.719 [2024-04-17 06:56:23.065064] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.719 [2024-04-17 06:56:23.068527] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.719 [2024-04-17 06:56:23.077642] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.719 [2024-04-17 06:56:23.078075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.719 [2024-04-17 06:56:23.078291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.719 [2024-04-17 06:56:23.078318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.719 [2024-04-17 06:56:23.078333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.719 [2024-04-17 06:56:23.078585] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.719 [2024-04-17 06:56:23.078824] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.719 [2024-04-17 06:56:23.078848] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.720 [2024-04-17 06:56:23.078863] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.720 [2024-04-17 06:56:23.082394] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.720 [2024-04-17 06:56:23.091505] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.720 [2024-04-17 06:56:23.091955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.720 [2024-04-17 06:56:23.092324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.720 [2024-04-17 06:56:23.092351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.720 [2024-04-17 06:56:23.092367] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.720 [2024-04-17 06:56:23.092608] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.720 [2024-04-17 06:56:23.092845] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.720 [2024-04-17 06:56:23.092868] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.720 [2024-04-17 06:56:23.092883] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.720 [2024-04-17 06:56:23.096432] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.720 [2024-04-17 06:56:23.105380] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.720 [2024-04-17 06:56:23.105818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.720 [2024-04-17 06:56:23.105989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.720 [2024-04-17 06:56:23.106014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.720 [2024-04-17 06:56:23.106029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.720 [2024-04-17 06:56:23.106306] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.720 [2024-04-17 06:56:23.106505] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.720 [2024-04-17 06:56:23.106538] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.720 [2024-04-17 06:56:23.106550] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.720 [2024-04-17 06:56:23.110015] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.720 [2024-04-17 06:56:23.119217] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.720 [2024-04-17 06:56:23.119792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.720 [2024-04-17 06:56:23.120010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.720 [2024-04-17 06:56:23.120035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.720 [2024-04-17 06:56:23.120052] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.720 [2024-04-17 06:56:23.120273] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.720 [2024-04-17 06:56:23.120490] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.720 [2024-04-17 06:56:23.120510] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.720 [2024-04-17 06:56:23.120524] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.720 [2024-04-17 06:56:23.124044] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.720 [2024-04-17 06:56:23.133028] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.720 [2024-04-17 06:56:23.133459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.720 [2024-04-17 06:56:23.133690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.720 [2024-04-17 06:56:23.133714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.720 [2024-04-17 06:56:23.133729] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.720 [2024-04-17 06:56:23.133973] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.720 [2024-04-17 06:56:23.134192] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.720 [2024-04-17 06:56:23.134212] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.720 [2024-04-17 06:56:23.134224] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.720 [2024-04-17 06:56:23.137738] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.720 [2024-04-17 06:56:23.146908] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.720 [2024-04-17 06:56:23.147402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.720 [2024-04-17 06:56:23.147573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.720 [2024-04-17 06:56:23.147598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.720 [2024-04-17 06:56:23.147614] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.720 [2024-04-17 06:56:23.147863] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.720 [2024-04-17 06:56:23.148060] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.720 [2024-04-17 06:56:23.148078] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.720 [2024-04-17 06:56:23.148089] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.720 [2024-04-17 06:56:23.151605] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.720 [2024-04-17 06:56:23.160757] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.720 [2024-04-17 06:56:23.161331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.720 [2024-04-17 06:56:23.161558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.720 [2024-04-17 06:56:23.161583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.720 [2024-04-17 06:56:23.161599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.720 [2024-04-17 06:56:23.161851] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.720 [2024-04-17 06:56:23.162060] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.720 [2024-04-17 06:56:23.162079] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.720 [2024-04-17 06:56:23.162091] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.720 [2024-04-17 06:56:23.165599] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.720 [2024-04-17 06:56:23.174712] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.720 [2024-04-17 06:56:23.175156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.720 [2024-04-17 06:56:23.175366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.720 [2024-04-17 06:56:23.175393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.720 [2024-04-17 06:56:23.175409] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.720 [2024-04-17 06:56:23.175671] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.720 [2024-04-17 06:56:23.175863] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.720 [2024-04-17 06:56:23.175881] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.720 [2024-04-17 06:56:23.175893] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.720 [2024-04-17 06:56:23.179385] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.720 [2024-04-17 06:56:23.188565] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.720 [2024-04-17 06:56:23.188942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.720 [2024-04-17 06:56:23.189099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.720 [2024-04-17 06:56:23.189124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.720 [2024-04-17 06:56:23.189140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.720 [2024-04-17 06:56:23.189397] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.720 [2024-04-17 06:56:23.189607] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.720 [2024-04-17 06:56:23.189630] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.720 [2024-04-17 06:56:23.189643] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.720 [2024-04-17 06:56:23.193105] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.720 [2024-04-17 06:56:23.202470] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.720 [2024-04-17 06:56:23.202885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.720 [2024-04-17 06:56:23.203080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.720 [2024-04-17 06:56:23.203105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.720 [2024-04-17 06:56:23.203120] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.720 [2024-04-17 06:56:23.203365] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.720 [2024-04-17 06:56:23.203575] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.720 [2024-04-17 06:56:23.203593] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.720 [2024-04-17 06:56:23.203606] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.720 [2024-04-17 06:56:23.207069] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.720 [2024-04-17 06:56:23.216468] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.720 [2024-04-17 06:56:23.216922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.721 [2024-04-17 06:56:23.217137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.721 [2024-04-17 06:56:23.217162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.721 [2024-04-17 06:56:23.217186] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.721 [2024-04-17 06:56:23.217447] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.721 [2024-04-17 06:56:23.217673] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.721 [2024-04-17 06:56:23.217691] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.721 [2024-04-17 06:56:23.217703] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.721 [2024-04-17 06:56:23.221172] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.721 [2024-04-17 06:56:23.230379] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.721 [2024-04-17 06:56:23.230867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.721 [2024-04-17 06:56:23.231051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.721 [2024-04-17 06:56:23.231077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.721 [2024-04-17 06:56:23.231093] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.721 [2024-04-17 06:56:23.231329] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.721 [2024-04-17 06:56:23.231552] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.721 [2024-04-17 06:56:23.231571] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.721 [2024-04-17 06:56:23.231588] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.721 [2024-04-17 06:56:23.235066] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.721 [2024-04-17 06:56:23.244258] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.721 [2024-04-17 06:56:23.244655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.721 [2024-04-17 06:56:23.244811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.721 [2024-04-17 06:56:23.244835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.721 [2024-04-17 06:56:23.244850] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.721 [2024-04-17 06:56:23.245067] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.721 [2024-04-17 06:56:23.245302] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.721 [2024-04-17 06:56:23.245322] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.721 [2024-04-17 06:56:23.245334] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.721 [2024-04-17 06:56:23.248814] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.721 [2024-04-17 06:56:23.258190] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.721 [2024-04-17 06:56:23.258650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.721 [2024-04-17 06:56:23.258831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.721 [2024-04-17 06:56:23.258856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.721 [2024-04-17 06:56:23.258871] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.721 [2024-04-17 06:56:23.259122] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.721 [2024-04-17 06:56:23.259352] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.721 [2024-04-17 06:56:23.259373] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.721 [2024-04-17 06:56:23.259385] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.721 [2024-04-17 06:56:23.262868] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.721 [2024-04-17 06:56:23.272033] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.721 [2024-04-17 06:56:23.272538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.721 [2024-04-17 06:56:23.272731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.721 [2024-04-17 06:56:23.272756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.721 [2024-04-17 06:56:23.272771] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.721 [2024-04-17 06:56:23.273023] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.721 [2024-04-17 06:56:23.273239] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.721 [2024-04-17 06:56:23.273258] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.721 [2024-04-17 06:56:23.273270] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.721 [2024-04-17 06:56:23.276755] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.721 [2024-04-17 06:56:23.285920] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.721 [2024-04-17 06:56:23.286467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.721 [2024-04-17 06:56:23.286677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.721 [2024-04-17 06:56:23.286703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.721 [2024-04-17 06:56:23.286719] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.721 [2024-04-17 06:56:23.286968] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.721 [2024-04-17 06:56:23.287184] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.721 [2024-04-17 06:56:23.287203] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.721 [2024-04-17 06:56:23.287231] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.721 [2024-04-17 06:56:23.290762] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.721 [2024-04-17 06:56:23.299737] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.721 [2024-04-17 06:56:23.300188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.721 [2024-04-17 06:56:23.300372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.721 [2024-04-17 06:56:23.300397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.721 [2024-04-17 06:56:23.300413] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.721 [2024-04-17 06:56:23.300675] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.721 [2024-04-17 06:56:23.300866] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.721 [2024-04-17 06:56:23.300884] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.721 [2024-04-17 06:56:23.300896] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.721 [2024-04-17 06:56:23.304402] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.721 [2024-04-17 06:56:23.313804] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.721 [2024-04-17 06:56:23.314266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.721 [2024-04-17 06:56:23.314443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.721 [2024-04-17 06:56:23.314480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.721 [2024-04-17 06:56:23.314522] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.721 [2024-04-17 06:56:23.314817] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.981 [2024-04-17 06:56:23.315035] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.981 [2024-04-17 06:56:23.315055] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.981 [2024-04-17 06:56:23.315068] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.981 [2024-04-17 06:56:23.318873] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.981 [2024-04-17 06:56:23.327627] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.981 [2024-04-17 06:56:23.328090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.981 [2024-04-17 06:56:23.328271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.981 [2024-04-17 06:56:23.328298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.981 [2024-04-17 06:56:23.328314] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.981 [2024-04-17 06:56:23.328553] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.981 [2024-04-17 06:56:23.328760] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.981 [2024-04-17 06:56:23.328778] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.981 [2024-04-17 06:56:23.328790] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.981 [2024-04-17 06:56:23.332421] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.981 [2024-04-17 06:56:23.341571] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.981 [2024-04-17 06:56:23.341997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.981 [2024-04-17 06:56:23.342166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.981 [2024-04-17 06:56:23.342201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.981 [2024-04-17 06:56:23.342218] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.981 [2024-04-17 06:56:23.342447] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.981 [2024-04-17 06:56:23.342655] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.981 [2024-04-17 06:56:23.342674] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.981 [2024-04-17 06:56:23.342685] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.981 [2024-04-17 06:56:23.346149] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.981 [2024-04-17 06:56:23.355551] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.981 [2024-04-17 06:56:23.355929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.981 [2024-04-17 06:56:23.356118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.981 [2024-04-17 06:56:23.356160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.981 [2024-04-17 06:56:23.356183] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.981 [2024-04-17 06:56:23.356437] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.981 [2024-04-17 06:56:23.356663] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.981 [2024-04-17 06:56:23.356682] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.981 [2024-04-17 06:56:23.356693] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.981 [2024-04-17 06:56:23.360158] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.981 [2024-04-17 06:56:23.369551] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.981 [2024-04-17 06:56:23.370080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.981 [2024-04-17 06:56:23.370281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.981 [2024-04-17 06:56:23.370308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.981 [2024-04-17 06:56:23.370324] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.981 [2024-04-17 06:56:23.370552] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.981 [2024-04-17 06:56:23.370784] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.981 [2024-04-17 06:56:23.370804] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.981 [2024-04-17 06:56:23.370817] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.981 [2024-04-17 06:56:23.374356] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.981 [2024-04-17 06:56:23.383552] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.981 [2024-04-17 06:56:23.384154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.981 [2024-04-17 06:56:23.384393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.981 [2024-04-17 06:56:23.384421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.981 [2024-04-17 06:56:23.384437] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.981 [2024-04-17 06:56:23.384679] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.981 [2024-04-17 06:56:23.384887] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.981 [2024-04-17 06:56:23.384905] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.981 [2024-04-17 06:56:23.384917] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.981 [2024-04-17 06:56:23.388433] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.981 [2024-04-17 06:56:23.397365] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.981 [2024-04-17 06:56:23.397980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.981 [2024-04-17 06:56:23.398211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.981 [2024-04-17 06:56:23.398240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.981 [2024-04-17 06:56:23.398257] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.981 [2024-04-17 06:56:23.398504] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.981 [2024-04-17 06:56:23.398712] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.981 [2024-04-17 06:56:23.398730] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.981 [2024-04-17 06:56:23.398742] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.981 [2024-04-17 06:56:23.402214] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.981 [2024-04-17 06:56:23.411374] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.981 [2024-04-17 06:56:23.411865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.981 [2024-04-17 06:56:23.412045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.981 [2024-04-17 06:56:23.412070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.981 [2024-04-17 06:56:23.412090] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.981 [2024-04-17 06:56:23.412350] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.981 [2024-04-17 06:56:23.412569] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.981 [2024-04-17 06:56:23.412588] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.981 [2024-04-17 06:56:23.412600] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.981 [2024-04-17 06:56:23.416072] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.981 [2024-04-17 06:56:23.425258] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.981 [2024-04-17 06:56:23.425725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.981 [2024-04-17 06:56:23.425926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.981 [2024-04-17 06:56:23.425952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.981 [2024-04-17 06:56:23.425968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.981 [2024-04-17 06:56:23.426233] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.981 [2024-04-17 06:56:23.426432] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.981 [2024-04-17 06:56:23.426450] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.981 [2024-04-17 06:56:23.426463] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.981 [2024-04-17 06:56:23.429942] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.981 [2024-04-17 06:56:23.439112] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.981 [2024-04-17 06:56:23.439642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.981 [2024-04-17 06:56:23.439835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.981 [2024-04-17 06:56:23.439861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.982 [2024-04-17 06:56:23.439876] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.982 [2024-04-17 06:56:23.440120] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.982 [2024-04-17 06:56:23.440340] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.982 [2024-04-17 06:56:23.440360] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.982 [2024-04-17 06:56:23.440372] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.982 [2024-04-17 06:56:23.443864] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.982 [2024-04-17 06:56:23.453033] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.982 [2024-04-17 06:56:23.453490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.982 [2024-04-17 06:56:23.453690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.982 [2024-04-17 06:56:23.453715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.982 [2024-04-17 06:56:23.453730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.982 [2024-04-17 06:56:23.453979] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.982 [2024-04-17 06:56:23.454171] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.982 [2024-04-17 06:56:23.454213] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.982 [2024-04-17 06:56:23.454225] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.982 [2024-04-17 06:56:23.457741] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
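Note: the recurring "(9): Bad file descriptor" entries are errno 9 (EBADF): by the time nvme_tcp_qpair_process_completions() tries to flush the qpair, the underlying socket has already been torn down by the failed connect. A minimal, self-contained POSIX illustration of that errno (again not SPDK code, just a sketch):

/*
 * Illustration only: any I/O call on a descriptor that has already been
 * closed fails with errno 9 (EBADF), matching the "(9): Bad file
 * descriptor" flush errors in the log above.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    close(fd);                        /* descriptor is no longer valid */

    char byte = 0;
    if (send(fd, &byte, sizeof(byte), 0) < 0) {
        /* Prints errno 9: Bad file descriptor. */
        printf("send() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    return 0;
}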
00:30:18.982 [2024-04-17 06:56:23.466878] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.982 [2024-04-17 06:56:23.467336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.982 [2024-04-17 06:56:23.467496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.982 [2024-04-17 06:56:23.467521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.982 [2024-04-17 06:56:23.467536] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.982 [2024-04-17 06:56:23.467787] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.982 [2024-04-17 06:56:23.467979] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.982 [2024-04-17 06:56:23.467997] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.982 [2024-04-17 06:56:23.468008] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.982 [2024-04-17 06:56:23.471504] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.982 [2024-04-17 06:56:23.480842] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.982 [2024-04-17 06:56:23.481280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.982 [2024-04-17 06:56:23.481466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.982 [2024-04-17 06:56:23.481506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.982 [2024-04-17 06:56:23.481521] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.982 [2024-04-17 06:56:23.481759] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.982 [2024-04-17 06:56:23.482018] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.982 [2024-04-17 06:56:23.482042] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.982 [2024-04-17 06:56:23.482056] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.982 [2024-04-17 06:56:23.485607] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.982 [2024-04-17 06:56:23.494749] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.982 [2024-04-17 06:56:23.495267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.982 [2024-04-17 06:56:23.495433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.982 [2024-04-17 06:56:23.495473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.982 [2024-04-17 06:56:23.495488] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.982 [2024-04-17 06:56:23.495720] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.982 [2024-04-17 06:56:23.495932] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.982 [2024-04-17 06:56:23.495951] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.982 [2024-04-17 06:56:23.495962] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.982 [2024-04-17 06:56:23.499462] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.982 [2024-04-17 06:56:23.508648] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.982 [2024-04-17 06:56:23.509096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.982 [2024-04-17 06:56:23.509268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.982 [2024-04-17 06:56:23.509294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.982 [2024-04-17 06:56:23.509309] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.982 [2024-04-17 06:56:23.509546] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.982 [2024-04-17 06:56:23.509753] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.982 [2024-04-17 06:56:23.509771] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.982 [2024-04-17 06:56:23.509782] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.982 [2024-04-17 06:56:23.513259] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.982 [2024-04-17 06:56:23.522634] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.982 [2024-04-17 06:56:23.523029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.982 [2024-04-17 06:56:23.523206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.982 [2024-04-17 06:56:23.523232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.982 [2024-04-17 06:56:23.523248] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.982 [2024-04-17 06:56:23.523501] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.982 [2024-04-17 06:56:23.523692] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.982 [2024-04-17 06:56:23.523710] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.982 [2024-04-17 06:56:23.523721] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.982 [2024-04-17 06:56:23.527193] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.982 [2024-04-17 06:56:23.536577] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.982 [2024-04-17 06:56:23.537063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.982 [2024-04-17 06:56:23.537223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.982 [2024-04-17 06:56:23.537250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.982 [2024-04-17 06:56:23.537265] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.982 [2024-04-17 06:56:23.537505] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.982 [2024-04-17 06:56:23.537714] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.982 [2024-04-17 06:56:23.537738] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.982 [2024-04-17 06:56:23.537750] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.982 [2024-04-17 06:56:23.541224] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.982 [2024-04-17 06:56:23.550398] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.982 [2024-04-17 06:56:23.550840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.982 [2024-04-17 06:56:23.551012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.982 [2024-04-17 06:56:23.551037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.982 [2024-04-17 06:56:23.551053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.982 [2024-04-17 06:56:23.551301] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.982 [2024-04-17 06:56:23.551534] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.982 [2024-04-17 06:56:23.551552] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.982 [2024-04-17 06:56:23.551564] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.982 [2024-04-17 06:56:23.555032] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.982 [2024-04-17 06:56:23.564212] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.982 [2024-04-17 06:56:23.564882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.982 [2024-04-17 06:56:23.565090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.982 [2024-04-17 06:56:23.565116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.982 [2024-04-17 06:56:23.565132] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.982 [2024-04-17 06:56:23.565391] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.982 [2024-04-17 06:56:23.565622] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.982 [2024-04-17 06:56:23.565641] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.983 [2024-04-17 06:56:23.565653] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.983 [2024-04-17 06:56:23.569125] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.983 [2024-04-17 06:56:23.578105] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.983 [2024-04-17 06:56:23.578567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.983 [2024-04-17 06:56:23.578772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.983 [2024-04-17 06:56:23.578797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:18.983 [2024-04-17 06:56:23.578812] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:18.983 [2024-04-17 06:56:23.579054] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:18.983 [2024-04-17 06:56:23.579276] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.983 [2024-04-17 06:56:23.579296] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.983 [2024-04-17 06:56:23.579315] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.983 [2024-04-17 06:56:23.582814] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.242 [2024-04-17 06:56:23.592016] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.242 [2024-04-17 06:56:23.592554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.242 [2024-04-17 06:56:23.592730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.242 [2024-04-17 06:56:23.592759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.242 [2024-04-17 06:56:23.592775] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.242 [2024-04-17 06:56:23.593029] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.242 [2024-04-17 06:56:23.593290] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.242 [2024-04-17 06:56:23.593313] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.242 [2024-04-17 06:56:23.593334] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.242 [2024-04-17 06:56:23.596869] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.242 [2024-04-17 06:56:23.605847] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.242 [2024-04-17 06:56:23.606357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.242 [2024-04-17 06:56:23.606525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.242 [2024-04-17 06:56:23.606550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.242 [2024-04-17 06:56:23.606566] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.242 [2024-04-17 06:56:23.606816] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.242 [2024-04-17 06:56:23.607008] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.242 [2024-04-17 06:56:23.607026] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.242 [2024-04-17 06:56:23.607038] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.242 [2024-04-17 06:56:23.610565] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.242 [2024-04-17 06:56:23.619710] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.242 [2024-04-17 06:56:23.620142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.242 [2024-04-17 06:56:23.620316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.242 [2024-04-17 06:56:23.620342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.242 [2024-04-17 06:56:23.620357] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.242 [2024-04-17 06:56:23.620597] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.242 [2024-04-17 06:56:23.620865] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.242 [2024-04-17 06:56:23.620888] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.242 [2024-04-17 06:56:23.620904] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.242 [2024-04-17 06:56:23.624478] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.242 [2024-04-17 06:56:23.633580] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.242 [2024-04-17 06:56:23.634023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.242 [2024-04-17 06:56:23.634206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.242 [2024-04-17 06:56:23.634243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.242 [2024-04-17 06:56:23.634259] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.242 [2024-04-17 06:56:23.634488] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.242 [2024-04-17 06:56:23.634697] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.242 [2024-04-17 06:56:23.634715] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.242 [2024-04-17 06:56:23.634727] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.242 [2024-04-17 06:56:23.638196] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.242 [2024-04-17 06:56:23.647410] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.242 [2024-04-17 06:56:23.648017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.242 [2024-04-17 06:56:23.648330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.242 [2024-04-17 06:56:23.648356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.242 [2024-04-17 06:56:23.648372] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.242 [2024-04-17 06:56:23.648595] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.242 [2024-04-17 06:56:23.648802] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.242 [2024-04-17 06:56:23.648820] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.242 [2024-04-17 06:56:23.648832] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.242 [2024-04-17 06:56:23.652424] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.242 [2024-04-17 06:56:23.661373] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.242 [2024-04-17 06:56:23.661840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.242 [2024-04-17 06:56:23.661990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.242 [2024-04-17 06:56:23.662015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.242 [2024-04-17 06:56:23.662031] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.242 [2024-04-17 06:56:23.662298] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.243 [2024-04-17 06:56:23.662555] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.243 [2024-04-17 06:56:23.662574] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.243 [2024-04-17 06:56:23.662586] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.243 [2024-04-17 06:56:23.666075] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.243 [2024-04-17 06:56:23.675293] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.243 [2024-04-17 06:56:23.675769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.243 [2024-04-17 06:56:23.675957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.243 [2024-04-17 06:56:23.675982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.243 [2024-04-17 06:56:23.675998] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.243 [2024-04-17 06:56:23.676252] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.243 [2024-04-17 06:56:23.676483] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.243 [2024-04-17 06:56:23.676502] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.243 [2024-04-17 06:56:23.676514] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.243 [2024-04-17 06:56:23.679975] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.243 [2024-04-17 06:56:23.689149] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.243 [2024-04-17 06:56:23.689610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.243 [2024-04-17 06:56:23.689820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.243 [2024-04-17 06:56:23.689859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.243 [2024-04-17 06:56:23.689874] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.243 [2024-04-17 06:56:23.690118] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.243 [2024-04-17 06:56:23.690357] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.243 [2024-04-17 06:56:23.690378] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.243 [2024-04-17 06:56:23.690391] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.243 [2024-04-17 06:56:23.693880] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.243 [2024-04-17 06:56:23.703058] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.243 [2024-04-17 06:56:23.703574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.243 [2024-04-17 06:56:23.703751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.243 [2024-04-17 06:56:23.703776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.243 [2024-04-17 06:56:23.703791] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.243 [2024-04-17 06:56:23.704044] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.243 [2024-04-17 06:56:23.704278] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.243 [2024-04-17 06:56:23.704298] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.243 [2024-04-17 06:56:23.704311] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.243 [2024-04-17 06:56:23.707810] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.243 [2024-04-17 06:56:23.717011] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.243 [2024-04-17 06:56:23.717513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.243 [2024-04-17 06:56:23.717726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.243 [2024-04-17 06:56:23.717751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.243 [2024-04-17 06:56:23.717767] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.243 [2024-04-17 06:56:23.718009] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.243 [2024-04-17 06:56:23.718258] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.243 [2024-04-17 06:56:23.718279] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.243 [2024-04-17 06:56:23.718292] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.243 [2024-04-17 06:56:23.721810] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.243 [2024-04-17 06:56:23.730967] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.243 [2024-04-17 06:56:23.731416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.243 [2024-04-17 06:56:23.731602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.243 [2024-04-17 06:56:23.731627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.243 [2024-04-17 06:56:23.731642] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.243 [2024-04-17 06:56:23.731904] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.243 [2024-04-17 06:56:23.732096] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.243 [2024-04-17 06:56:23.732114] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.243 [2024-04-17 06:56:23.732126] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.243 [2024-04-17 06:56:23.735676] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.243 [2024-04-17 06:56:23.744811] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.243 [2024-04-17 06:56:23.745299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.243 [2024-04-17 06:56:23.745465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.243 [2024-04-17 06:56:23.745490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.243 [2024-04-17 06:56:23.745506] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.243 [2024-04-17 06:56:23.745744] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.243 [2024-04-17 06:56:23.745952] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.243 [2024-04-17 06:56:23.745970] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.243 [2024-04-17 06:56:23.745982] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.243 [2024-04-17 06:56:23.749512] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.243 [2024-04-17 06:56:23.758747] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.243 [2024-04-17 06:56:23.759167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.243 [2024-04-17 06:56:23.759354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.243 [2024-04-17 06:56:23.759384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.243 [2024-04-17 06:56:23.759401] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.243 [2024-04-17 06:56:23.759648] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.243 [2024-04-17 06:56:23.759855] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.243 [2024-04-17 06:56:23.759874] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.243 [2024-04-17 06:56:23.759886] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.243 [2024-04-17 06:56:23.763375] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.243 [2024-04-17 06:56:23.772720] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.243 [2024-04-17 06:56:23.773140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.243 [2024-04-17 06:56:23.773327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.243 [2024-04-17 06:56:23.773353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.243 [2024-04-17 06:56:23.773369] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.243 [2024-04-17 06:56:23.773607] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.243 [2024-04-17 06:56:23.773814] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.243 [2024-04-17 06:56:23.773832] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.243 [2024-04-17 06:56:23.773844] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.243 [2024-04-17 06:56:23.777279] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.243 [2024-04-17 06:56:23.786648] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.243 [2024-04-17 06:56:23.787218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.243 [2024-04-17 06:56:23.787428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.243 [2024-04-17 06:56:23.787454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.243 [2024-04-17 06:56:23.787469] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.243 [2024-04-17 06:56:23.787715] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.243 [2024-04-17 06:56:23.787907] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.243 [2024-04-17 06:56:23.787925] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.243 [2024-04-17 06:56:23.787936] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.243 [2024-04-17 06:56:23.791429] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.244 [2024-04-17 06:56:23.800567] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.244 [2024-04-17 06:56:23.800989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.244 [2024-04-17 06:56:23.801195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.244 [2024-04-17 06:56:23.801221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.244 [2024-04-17 06:56:23.801242] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.244 [2024-04-17 06:56:23.801482] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.244 [2024-04-17 06:56:23.801689] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.244 [2024-04-17 06:56:23.801708] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.244 [2024-04-17 06:56:23.801719] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.244 [2024-04-17 06:56:23.805194] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.244 [2024-04-17 06:56:23.814370] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.244 [2024-04-17 06:56:23.814761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.244 [2024-04-17 06:56:23.814925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.244 [2024-04-17 06:56:23.814950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.244 [2024-04-17 06:56:23.814965] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.244 [2024-04-17 06:56:23.815224] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.244 [2024-04-17 06:56:23.815429] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.244 [2024-04-17 06:56:23.815448] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.244 [2024-04-17 06:56:23.815461] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.244 [2024-04-17 06:56:23.818943] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.244 [2024-04-17 06:56:23.828330] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.244 [2024-04-17 06:56:23.828715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.244 [2024-04-17 06:56:23.828878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.244 [2024-04-17 06:56:23.828903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.244 [2024-04-17 06:56:23.828917] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.244 [2024-04-17 06:56:23.829149] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.244 [2024-04-17 06:56:23.829391] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.244 [2024-04-17 06:56:23.829411] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.244 [2024-04-17 06:56:23.829423] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.244 [2024-04-17 06:56:23.832932] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.244 [2024-04-17 06:56:23.842306] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.244 [2024-04-17 06:56:23.842761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.244 [2024-04-17 06:56:23.842927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.244 [2024-04-17 06:56:23.842967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.244 [2024-04-17 06:56:23.842982] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.244 [2024-04-17 06:56:23.843239] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.244 [2024-04-17 06:56:23.843443] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.244 [2024-04-17 06:56:23.843463] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.244 [2024-04-17 06:56:23.843475] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.244 [2024-04-17 06:56:23.847202] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.504 [2024-04-17 06:56:23.856342] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.504 [2024-04-17 06:56:23.856805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-04-17 06:56:23.856991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-04-17 06:56:23.857017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.504 [2024-04-17 06:56:23.857033] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.504 [2024-04-17 06:56:23.857298] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.504 [2024-04-17 06:56:23.857517] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.504 [2024-04-17 06:56:23.857536] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.504 [2024-04-17 06:56:23.857548] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.504 [2024-04-17 06:56:23.861014] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.504 [2024-04-17 06:56:23.870213] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.504 [2024-04-17 06:56:23.870672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-04-17 06:56:23.870859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-04-17 06:56:23.870885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.504 [2024-04-17 06:56:23.870900] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.504 [2024-04-17 06:56:23.871128] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.504 [2024-04-17 06:56:23.871383] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.504 [2024-04-17 06:56:23.871405] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.504 [2024-04-17 06:56:23.871418] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.504 [2024-04-17 06:56:23.874946] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.504 [2024-04-17 06:56:23.884126] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.504 [2024-04-17 06:56:23.884517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-04-17 06:56:23.884729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-04-17 06:56:23.884755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.504 [2024-04-17 06:56:23.884771] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.504 [2024-04-17 06:56:23.885019] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.504 [2024-04-17 06:56:23.885276] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.504 [2024-04-17 06:56:23.885296] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.504 [2024-04-17 06:56:23.885309] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.504 [2024-04-17 06:56:23.888820] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.504 [2024-04-17 06:56:23.897989] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.504 [2024-04-17 06:56:23.898365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-04-17 06:56:23.898528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-04-17 06:56:23.898553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.504 [2024-04-17 06:56:23.898568] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.504 [2024-04-17 06:56:23.898799] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.504 [2024-04-17 06:56:23.899005] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.504 [2024-04-17 06:56:23.899024] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.504 [2024-04-17 06:56:23.899035] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.504 [2024-04-17 06:56:23.902522] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.504 [2024-04-17 06:56:23.911865] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.504 [2024-04-17 06:56:23.912482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-04-17 06:56:23.912696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-04-17 06:56:23.912722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.504 [2024-04-17 06:56:23.912737] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.504 [2024-04-17 06:56:23.912998] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.504 [2024-04-17 06:56:23.913214] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.504 [2024-04-17 06:56:23.913233] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.504 [2024-04-17 06:56:23.913246] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.504 [2024-04-17 06:56:23.916736] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.504 [2024-04-17 06:56:23.925710] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.504 [2024-04-17 06:56:23.926171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-04-17 06:56:23.926330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-04-17 06:56:23.926355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.504 [2024-04-17 06:56:23.926371] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.504 [2024-04-17 06:56:23.926619] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.504 [2024-04-17 06:56:23.926846] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.504 [2024-04-17 06:56:23.926869] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.504 [2024-04-17 06:56:23.926882] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.504 [2024-04-17 06:56:23.930414] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.504 [2024-04-17 06:56:23.939555] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.504 [2024-04-17 06:56:23.940012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-04-17 06:56:23.940209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-04-17 06:56:23.940235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.504 [2024-04-17 06:56:23.940251] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.504 [2024-04-17 06:56:23.940477] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.504 [2024-04-17 06:56:23.940701] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.504 [2024-04-17 06:56:23.940720] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.504 [2024-04-17 06:56:23.940732] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.504 [2024-04-17 06:56:23.944203] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.504 [2024-04-17 06:56:23.953380] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.504 [2024-04-17 06:56:23.953786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-04-17 06:56:23.954018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-04-17 06:56:23.954044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.504 [2024-04-17 06:56:23.954059] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.504 [2024-04-17 06:56:23.954326] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.504 [2024-04-17 06:56:23.954549] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.504 [2024-04-17 06:56:23.954568] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.504 [2024-04-17 06:56:23.954579] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.505 [2024-04-17 06:56:23.958046] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.505 [2024-04-17 06:56:23.967223] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.505 [2024-04-17 06:56:23.967698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-04-17 06:56:23.967887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-04-17 06:56:23.967912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.505 [2024-04-17 06:56:23.967928] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.505 [2024-04-17 06:56:23.968165] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.505 [2024-04-17 06:56:23.968393] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.505 [2024-04-17 06:56:23.968413] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.505 [2024-04-17 06:56:23.968430] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.505 [2024-04-17 06:56:23.971926] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.505 [2024-04-17 06:56:23.981090] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.505 [2024-04-17 06:56:23.981791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-04-17 06:56:23.982041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-04-17 06:56:23.982069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.505 [2024-04-17 06:56:23.982085] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.505 [2024-04-17 06:56:23.982341] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.505 [2024-04-17 06:56:23.982557] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.505 [2024-04-17 06:56:23.982576] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.505 [2024-04-17 06:56:23.982588] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.505 [2024-04-17 06:56:23.986054] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.505 [2024-04-17 06:56:23.995021] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.505 [2024-04-17 06:56:23.995524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-04-17 06:56:23.995740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-04-17 06:56:23.995766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.505 [2024-04-17 06:56:23.995782] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.505 [2024-04-17 06:56:23.996030] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.505 [2024-04-17 06:56:23.996265] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.505 [2024-04-17 06:56:23.996285] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.505 [2024-04-17 06:56:23.996297] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.505 [2024-04-17 06:56:23.999781] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.505 [2024-04-17 06:56:24.008929] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.505 [2024-04-17 06:56:24.009531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-04-17 06:56:24.009747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-04-17 06:56:24.009789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.505 [2024-04-17 06:56:24.009806] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.505 [2024-04-17 06:56:24.010044] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.505 [2024-04-17 06:56:24.010266] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.505 [2024-04-17 06:56:24.010286] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.505 [2024-04-17 06:56:24.010299] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.505 [2024-04-17 06:56:24.013817] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.505 [2024-04-17 06:56:24.022797] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.505 [2024-04-17 06:56:24.023403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-04-17 06:56:24.023638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-04-17 06:56:24.023665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.505 [2024-04-17 06:56:24.023682] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.505 [2024-04-17 06:56:24.023938] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.505 [2024-04-17 06:56:24.024132] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.505 [2024-04-17 06:56:24.024150] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.505 [2024-04-17 06:56:24.024162] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.505 [2024-04-17 06:56:24.027686] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.505 [2024-04-17 06:56:24.036616] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.505 [2024-04-17 06:56:24.037013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-04-17 06:56:24.037213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-04-17 06:56:24.037240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.505 [2024-04-17 06:56:24.037256] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.505 [2024-04-17 06:56:24.037495] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.505 [2024-04-17 06:56:24.037704] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.505 [2024-04-17 06:56:24.037722] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.505 [2024-04-17 06:56:24.037734] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.505 [2024-04-17 06:56:24.041209] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.505 [2024-04-17 06:56:24.050592] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.505 [2024-04-17 06:56:24.051044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-04-17 06:56:24.051318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-04-17 06:56:24.051345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.505 [2024-04-17 06:56:24.051360] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.505 [2024-04-17 06:56:24.051612] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.505 [2024-04-17 06:56:24.051818] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.505 [2024-04-17 06:56:24.051836] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.505 [2024-04-17 06:56:24.051848] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.505 [2024-04-17 06:56:24.055340] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.505 [2024-04-17 06:56:24.064505] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.505 [2024-04-17 06:56:24.064958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-04-17 06:56:24.065143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-04-17 06:56:24.065168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.505 [2024-04-17 06:56:24.065193] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.505 [2024-04-17 06:56:24.065436] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.505 [2024-04-17 06:56:24.065645] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.505 [2024-04-17 06:56:24.065663] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.505 [2024-04-17 06:56:24.065675] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.505 [2024-04-17 06:56:24.069141] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.505 [2024-04-17 06:56:24.078332] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.505 [2024-04-17 06:56:24.078756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-04-17 06:56:24.078974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-04-17 06:56:24.078999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.505 [2024-04-17 06:56:24.079015] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.505 [2024-04-17 06:56:24.079275] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.505 [2024-04-17 06:56:24.079488] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.505 [2024-04-17 06:56:24.079507] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.505 [2024-04-17 06:56:24.079518] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.505 [2024-04-17 06:56:24.082980] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.505 [2024-04-17 06:56:24.092147] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.505 [2024-04-17 06:56:24.092600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-04-17 06:56:24.092811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.506 [2024-04-17 06:56:24.092837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.506 [2024-04-17 06:56:24.092853] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.506 [2024-04-17 06:56:24.093111] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.506 [2024-04-17 06:56:24.093330] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.506 [2024-04-17 06:56:24.093350] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.506 [2024-04-17 06:56:24.093362] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.506 [2024-04-17 06:56:24.096850] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.506 [2024-04-17 06:56:24.106087] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.506 [2024-04-17 06:56:24.106607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.506 [2024-04-17 06:56:24.106780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.506 [2024-04-17 06:56:24.106806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.506 [2024-04-17 06:56:24.106822] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.506 [2024-04-17 06:56:24.107064] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.506 [2024-04-17 06:56:24.107317] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.506 [2024-04-17 06:56:24.107337] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.506 [2024-04-17 06:56:24.107350] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.764 [2024-04-17 06:56:24.111121] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.764 [2024-04-17 06:56:24.120086] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.764 [2024-04-17 06:56:24.120611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.764 [2024-04-17 06:56:24.120826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.764 [2024-04-17 06:56:24.120852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.764 [2024-04-17 06:56:24.120868] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.764 [2024-04-17 06:56:24.121096] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.764 [2024-04-17 06:56:24.121351] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.764 [2024-04-17 06:56:24.121373] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.764 [2024-04-17 06:56:24.121387] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.764 [2024-04-17 06:56:24.124914] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.764 [2024-04-17 06:56:24.133957] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.764 [2024-04-17 06:56:24.134492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.764 [2024-04-17 06:56:24.134699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.764 [2024-04-17 06:56:24.134725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.764 [2024-04-17 06:56:24.134741] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.764 [2024-04-17 06:56:24.134980] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.764 [2024-04-17 06:56:24.135200] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.764 [2024-04-17 06:56:24.135235] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.764 [2024-04-17 06:56:24.135249] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.764 [2024-04-17 06:56:24.138791] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.764 [2024-04-17 06:56:24.147806] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.764 [2024-04-17 06:56:24.148319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.765 [2024-04-17 06:56:24.148517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.765 [2024-04-17 06:56:24.148542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.765 [2024-04-17 06:56:24.148564] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.765 [2024-04-17 06:56:24.148814] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.765 [2024-04-17 06:56:24.149006] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.765 [2024-04-17 06:56:24.149024] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.765 [2024-04-17 06:56:24.149036] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.765 [2024-04-17 06:56:24.152551] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.765 [2024-04-17 06:56:24.161727] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.765 [2024-04-17 06:56:24.162312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.765 [2024-04-17 06:56:24.162501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.765 [2024-04-17 06:56:24.162528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.765 [2024-04-17 06:56:24.162543] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.765 [2024-04-17 06:56:24.162793] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.765 [2024-04-17 06:56:24.163005] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.765 [2024-04-17 06:56:24.163023] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.765 [2024-04-17 06:56:24.163036] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.765 [2024-04-17 06:56:24.166571] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.765 [2024-04-17 06:56:24.175732] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.765 [2024-04-17 06:56:24.176236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.765 [2024-04-17 06:56:24.176423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.765 [2024-04-17 06:56:24.176448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.765 [2024-04-17 06:56:24.176463] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.765 [2024-04-17 06:56:24.176725] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.765 [2024-04-17 06:56:24.176917] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.765 [2024-04-17 06:56:24.176935] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.765 [2024-04-17 06:56:24.176947] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.765 [2024-04-17 06:56:24.180468] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.765 [2024-04-17 06:56:24.189617] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.765 [2024-04-17 06:56:24.190241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.765 [2024-04-17 06:56:24.190438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.765 [2024-04-17 06:56:24.190463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.765 [2024-04-17 06:56:24.190479] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.765 [2024-04-17 06:56:24.190729] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.765 [2024-04-17 06:56:24.190921] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.765 [2024-04-17 06:56:24.190939] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.765 [2024-04-17 06:56:24.190951] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.765 [2024-04-17 06:56:24.194454] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.765 [2024-04-17 06:56:24.203616] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.765 [2024-04-17 06:56:24.204187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.765 [2024-04-17 06:56:24.204407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.765 [2024-04-17 06:56:24.204432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.765 [2024-04-17 06:56:24.204448] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.765 [2024-04-17 06:56:24.204698] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.765 [2024-04-17 06:56:24.204890] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.765 [2024-04-17 06:56:24.204908] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.765 [2024-04-17 06:56:24.204920] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.765 [2024-04-17 06:56:24.208479] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.765 [2024-04-17 06:56:24.217432] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.765 [2024-04-17 06:56:24.217988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.765 [2024-04-17 06:56:24.218206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.765 [2024-04-17 06:56:24.218235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.765 [2024-04-17 06:56:24.218250] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.765 [2024-04-17 06:56:24.218493] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.765 [2024-04-17 06:56:24.218684] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.765 [2024-04-17 06:56:24.218702] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.765 [2024-04-17 06:56:24.218714] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.765 [2024-04-17 06:56:24.222195] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.765 [2024-04-17 06:56:24.231376] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.765 [2024-04-17 06:56:24.231892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.765 [2024-04-17 06:56:24.232040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.765 [2024-04-17 06:56:24.232065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.765 [2024-04-17 06:56:24.232080] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.765 [2024-04-17 06:56:24.232327] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.765 [2024-04-17 06:56:24.232564] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.765 [2024-04-17 06:56:24.232583] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.765 [2024-04-17 06:56:24.232595] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.765 [2024-04-17 06:56:24.236062] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.765 [2024-04-17 06:56:24.245256] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.765 [2024-04-17 06:56:24.245683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.765 [2024-04-17 06:56:24.245961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.765 [2024-04-17 06:56:24.245986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.765 [2024-04-17 06:56:24.246001] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.765 [2024-04-17 06:56:24.246263] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.765 [2024-04-17 06:56:24.246475] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.765 [2024-04-17 06:56:24.246494] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.765 [2024-04-17 06:56:24.246520] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.765 [2024-04-17 06:56:24.249995] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.765 [2024-04-17 06:56:24.259194] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.765 [2024-04-17 06:56:24.259592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.765 [2024-04-17 06:56:24.259761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.765 [2024-04-17 06:56:24.259786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.765 [2024-04-17 06:56:24.259801] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.765 [2024-04-17 06:56:24.260036] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.765 [2024-04-17 06:56:24.260272] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.765 [2024-04-17 06:56:24.260293] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.765 [2024-04-17 06:56:24.260305] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.765 [2024-04-17 06:56:24.263819] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.765 [2024-04-17 06:56:24.273182] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.765 [2024-04-17 06:56:24.273642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.765 [2024-04-17 06:56:24.273890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.765 [2024-04-17 06:56:24.273915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.765 [2024-04-17 06:56:24.273930] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.766 [2024-04-17 06:56:24.274186] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.766 [2024-04-17 06:56:24.274396] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.766 [2024-04-17 06:56:24.274420] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.766 [2024-04-17 06:56:24.274433] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.766 [2024-04-17 06:56:24.277914] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.766 [2024-04-17 06:56:24.287082] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.766 [2024-04-17 06:56:24.287773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.766 [2024-04-17 06:56:24.288116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.766 [2024-04-17 06:56:24.288143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.766 [2024-04-17 06:56:24.288160] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.766 [2024-04-17 06:56:24.288415] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.766 [2024-04-17 06:56:24.288646] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.766 [2024-04-17 06:56:24.288665] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.766 [2024-04-17 06:56:24.288677] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.766 [2024-04-17 06:56:24.292152] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.766 [2024-04-17 06:56:24.300914] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.766 [2024-04-17 06:56:24.301373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.766 [2024-04-17 06:56:24.301565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.766 [2024-04-17 06:56:24.301590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.766 [2024-04-17 06:56:24.301606] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.766 [2024-04-17 06:56:24.301865] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.766 [2024-04-17 06:56:24.302057] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.766 [2024-04-17 06:56:24.302076] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.766 [2024-04-17 06:56:24.302089] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.766 [2024-04-17 06:56:24.305615] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.766 [2024-04-17 06:56:24.314757] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.766 [2024-04-17 06:56:24.315212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.766 [2024-04-17 06:56:24.315404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.766 [2024-04-17 06:56:24.315430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.766 [2024-04-17 06:56:24.315446] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.766 [2024-04-17 06:56:24.315692] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.766 [2024-04-17 06:56:24.315884] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.766 [2024-04-17 06:56:24.315902] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.766 [2024-04-17 06:56:24.315919] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.766 [2024-04-17 06:56:24.319429] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.766 [2024-04-17 06:56:24.328556] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.766 [2024-04-17 06:56:24.329040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.766 [2024-04-17 06:56:24.329234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.766 [2024-04-17 06:56:24.329261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.766 [2024-04-17 06:56:24.329276] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.766 [2024-04-17 06:56:24.329515] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.766 [2024-04-17 06:56:24.329722] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.766 [2024-04-17 06:56:24.329741] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.766 [2024-04-17 06:56:24.329753] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.766 [2024-04-17 06:56:24.333220] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.766 [2024-04-17 06:56:24.342437] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.766 [2024-04-17 06:56:24.342953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.766 [2024-04-17 06:56:24.343120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.766 [2024-04-17 06:56:24.343147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.766 [2024-04-17 06:56:24.343162] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.766 [2024-04-17 06:56:24.343410] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.766 [2024-04-17 06:56:24.343637] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.766 [2024-04-17 06:56:24.343655] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.766 [2024-04-17 06:56:24.343667] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.766 [2024-04-17 06:56:24.347138] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.766 [2024-04-17 06:56:24.356350] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.766 [2024-04-17 06:56:24.356969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.766 [2024-04-17 06:56:24.357189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.766 [2024-04-17 06:56:24.357217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:19.766 [2024-04-17 06:56:24.357233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:19.766 [2024-04-17 06:56:24.357476] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:19.766 [2024-04-17 06:56:24.357685] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.766 [2024-04-17 06:56:24.357703] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.766 [2024-04-17 06:56:24.357715] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.766 [2024-04-17 06:56:24.361198] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.766 [2024-04-17 06:56:24.370400] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.766 [2024-04-17 06:56:24.370972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.026 [2024-04-17 06:56:24.371139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.026 [2024-04-17 06:56:24.371165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.026 [2024-04-17 06:56:24.371195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.026 [2024-04-17 06:56:24.371471] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.026 [2024-04-17 06:56:24.371725] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.026 [2024-04-17 06:56:24.371746] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.026 [2024-04-17 06:56:24.371775] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.026 [2024-04-17 06:56:24.375381] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.026 [2024-04-17 06:56:24.384279] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.026 [2024-04-17 06:56:24.384748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.026 [2024-04-17 06:56:24.384910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.026 [2024-04-17 06:56:24.384936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.026 [2024-04-17 06:56:24.384952] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.026 [2024-04-17 06:56:24.385214] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.026 [2024-04-17 06:56:24.385433] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.026 [2024-04-17 06:56:24.385454] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.026 [2024-04-17 06:56:24.385467] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.026 [2024-04-17 06:56:24.388949] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.026 [2024-04-17 06:56:24.398124] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.026 [2024-04-17 06:56:24.398557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.026 [2024-04-17 06:56:24.398716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.026 [2024-04-17 06:56:24.398742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.026 [2024-04-17 06:56:24.398758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.026 [2024-04-17 06:56:24.399005] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.026 [2024-04-17 06:56:24.399224] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.026 [2024-04-17 06:56:24.399245] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.026 [2024-04-17 06:56:24.399258] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.026 [2024-04-17 06:56:24.402753] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.026 [2024-04-17 06:56:24.412120] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.026 [2024-04-17 06:56:24.412592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.026 [2024-04-17 06:56:24.412752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.026 [2024-04-17 06:56:24.412778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.026 [2024-04-17 06:56:24.412793] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.026 [2024-04-17 06:56:24.413040] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.026 [2024-04-17 06:56:24.413276] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.026 [2024-04-17 06:56:24.413297] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.026 [2024-04-17 06:56:24.413310] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.026 [2024-04-17 06:56:24.416820] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.026 [2024-04-17 06:56:24.426000] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.026 [2024-04-17 06:56:24.426595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.026 [2024-04-17 06:56:24.426850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.026 [2024-04-17 06:56:24.426890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.026 [2024-04-17 06:56:24.426905] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.026 [2024-04-17 06:56:24.427150] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.026 [2024-04-17 06:56:24.427382] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.026 [2024-04-17 06:56:24.427403] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.026 [2024-04-17 06:56:24.427416] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.026 [2024-04-17 06:56:24.430928] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.026 [2024-04-17 06:56:24.439787] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.026 [2024-04-17 06:56:24.440261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.026 [2024-04-17 06:56:24.440426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.026 [2024-04-17 06:56:24.440452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.026 [2024-04-17 06:56:24.440467] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.026 [2024-04-17 06:56:24.440723] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.026 [2024-04-17 06:56:24.440960] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.026 [2024-04-17 06:56:24.440983] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.026 [2024-04-17 06:56:24.440998] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.026 [2024-04-17 06:56:24.444495] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.026 [2024-04-17 06:56:24.453646] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.026 [2024-04-17 06:56:24.454101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.026 [2024-04-17 06:56:24.454280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.026 [2024-04-17 06:56:24.454314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.026 [2024-04-17 06:56:24.454329] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.026 [2024-04-17 06:56:24.454568] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.026 [2024-04-17 06:56:24.454781] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.026 [2024-04-17 06:56:24.454800] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.026 [2024-04-17 06:56:24.454812] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.026 [2024-04-17 06:56:24.458303] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.026 [2024-04-17 06:56:24.467478] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.026 [2024-04-17 06:56:24.467907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.026 [2024-04-17 06:56:24.468030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.026 [2024-04-17 06:56:24.468055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.026 [2024-04-17 06:56:24.468070] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.026 [2024-04-17 06:56:24.468290] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.026 [2024-04-17 06:56:24.468544] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.026 [2024-04-17 06:56:24.468563] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.026 [2024-04-17 06:56:24.468575] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.026 [2024-04-17 06:56:24.472051] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.026 [2024-04-17 06:56:24.481438] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.026 [2024-04-17 06:56:24.481880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.026 [2024-04-17 06:56:24.482033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.026 [2024-04-17 06:56:24.482060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.026 [2024-04-17 06:56:24.482076] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.026 [2024-04-17 06:56:24.482323] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.026 [2024-04-17 06:56:24.482540] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.026 [2024-04-17 06:56:24.482559] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.026 [2024-04-17 06:56:24.482571] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.026 [2024-04-17 06:56:24.486034] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.026 [2024-04-17 06:56:24.495059] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.026 [2024-04-17 06:56:24.495461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.026 [2024-04-17 06:56:24.495646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.026 [2024-04-17 06:56:24.495676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.026 [2024-04-17 06:56:24.495692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.026 [2024-04-17 06:56:24.495904] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.026 [2024-04-17 06:56:24.496146] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.026 [2024-04-17 06:56:24.496167] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.026 [2024-04-17 06:56:24.496191] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.026 [2024-04-17 06:56:24.499379] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.026 [2024-04-17 06:56:24.508311] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.026 [2024-04-17 06:56:24.508776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.026 [2024-04-17 06:56:24.508945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.026 [2024-04-17 06:56:24.508985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.026 [2024-04-17 06:56:24.509000] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.026 [2024-04-17 06:56:24.509246] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.026 [2024-04-17 06:56:24.509471] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.026 [2024-04-17 06:56:24.509491] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.026 [2024-04-17 06:56:24.509503] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.026 [2024-04-17 06:56:24.512472] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.026 [2024-04-17 06:56:24.521529] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.027 [2024-04-17 06:56:24.521892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.027 [2024-04-17 06:56:24.522122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.027 [2024-04-17 06:56:24.522146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.027 [2024-04-17 06:56:24.522161] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.027 [2024-04-17 06:56:24.522382] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.027 [2024-04-17 06:56:24.522631] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.027 [2024-04-17 06:56:24.522650] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.027 [2024-04-17 06:56:24.522662] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.027 [2024-04-17 06:56:24.525675] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.027 [2024-04-17 06:56:24.534797] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.027 [2024-04-17 06:56:24.535287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.027 [2024-04-17 06:56:24.535471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.027 [2024-04-17 06:56:24.535497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.027 [2024-04-17 06:56:24.535517] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.027 [2024-04-17 06:56:24.535769] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.027 [2024-04-17 06:56:24.535966] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.027 [2024-04-17 06:56:24.535984] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.027 [2024-04-17 06:56:24.535997] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.027 [2024-04-17 06:56:24.539045] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.027 [2024-04-17 06:56:24.548118] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.027 [2024-04-17 06:56:24.548516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.027 [2024-04-17 06:56:24.548737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.027 [2024-04-17 06:56:24.548762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.027 [2024-04-17 06:56:24.548778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.027 [2024-04-17 06:56:24.549028] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.027 [2024-04-17 06:56:24.549266] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.027 [2024-04-17 06:56:24.549287] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.027 [2024-04-17 06:56:24.549301] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.027 [2024-04-17 06:56:24.552280] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.027 [2024-04-17 06:56:24.561316] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.027 [2024-04-17 06:56:24.561765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.027 [2024-04-17 06:56:24.561938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.027 [2024-04-17 06:56:24.561963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.027 [2024-04-17 06:56:24.561979] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.027 [2024-04-17 06:56:24.562235] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.027 [2024-04-17 06:56:24.562453] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.027 [2024-04-17 06:56:24.562491] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.027 [2024-04-17 06:56:24.562504] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.027 [2024-04-17 06:56:24.565512] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
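Each block above is one pass of the bdev_nvme reset/reconnect poller: the controller is disconnected, the TCP qpair connect is retried, it fails with ECONNREFUSED, the controller is marked failed, and "Resetting controller failed." is logged before the next attempt roughly 13 ms later. A simplified, self-contained C sketch of that retry pattern follows; the function and variable names here are hypothetical stand-ins and do not correspond to SPDK's actual bdev_nvme internals.

    #include <stdio.h>
    #include <stdbool.h>
    #include <errno.h>

    /* Pretend the target's listener comes back after three refused attempts. */
    static int attempts_until_listener_up = 3;

    /* Stand-in for the TCP qpair connect: refuses until the listener is back. */
    static int try_connect_qpair(void)
    {
        if (attempts_until_listener_up > 0) {
            attempts_until_listener_up--;
            return -ECONNREFUSED;                /* errno 111, as in the log */
        }
        return 0;
    }

    /* One pass of a reset/reconnect poller. Returns true once reconnected. */
    static bool reset_ctrlr_poll(int attempt)
    {
        int rc = try_connect_qpair();

        if (rc != 0) {
            printf("attempt %d: connect failed (errno %d), controller stays in failed state\n",
                   attempt, -rc);
            printf("attempt %d: Resetting controller failed.\n", attempt);
            return false;                        /* caller re-arms the poller and retries */
        }

        printf("attempt %d: reconnected, reset complete\n", attempt);
        return true;
    }

    int main(void)
    {
        for (int attempt = 1; !reset_ctrlr_poll(attempt); attempt++) {
            /* In the real service a poller fires again a few ms later; here we just loop. */
        }
        return 0;
    }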
00:30:20.027 [2024-04-17 06:56:24.574631] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.027 [2024-04-17 06:56:24.575047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.027 [2024-04-17 06:56:24.575234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.027 [2024-04-17 06:56:24.575260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.027 [2024-04-17 06:56:24.575275] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.027 [2024-04-17 06:56:24.575522] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.027 [2024-04-17 06:56:24.575719] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.027 [2024-04-17 06:56:24.575738] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.027 [2024-04-17 06:56:24.575750] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.027 [2024-04-17 06:56:24.578701] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.027 [2024-04-17 06:56:24.587947] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.027 [2024-04-17 06:56:24.588385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.027 [2024-04-17 06:56:24.588551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.027 [2024-04-17 06:56:24.588576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.027 [2024-04-17 06:56:24.588591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.027 [2024-04-17 06:56:24.588832] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.027 [2024-04-17 06:56:24.589030] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.027 [2024-04-17 06:56:24.589049] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.027 [2024-04-17 06:56:24.589062] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.027 [2024-04-17 06:56:24.591960] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.027 [2024-04-17 06:56:24.601266] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.027 [2024-04-17 06:56:24.601664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.027 [2024-04-17 06:56:24.601834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.027 [2024-04-17 06:56:24.601859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.027 [2024-04-17 06:56:24.601874] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.027 [2024-04-17 06:56:24.602127] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.027 [2024-04-17 06:56:24.602374] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.027 [2024-04-17 06:56:24.602396] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.027 [2024-04-17 06:56:24.602409] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.027 [2024-04-17 06:56:24.605340] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.027 [2024-04-17 06:56:24.614525] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.027 [2024-04-17 06:56:24.614940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.027 [2024-04-17 06:56:24.615105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.027 [2024-04-17 06:56:24.615130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.027 [2024-04-17 06:56:24.615145] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.027 [2024-04-17 06:56:24.615381] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.027 [2024-04-17 06:56:24.615621] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.027 [2024-04-17 06:56:24.615641] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.027 [2024-04-17 06:56:24.615653] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.027 [2024-04-17 06:56:24.618589] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.027 [2024-04-17 06:56:24.627858] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.027 [2024-04-17 06:56:24.628319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.027 [2024-04-17 06:56:24.628467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.027 [2024-04-17 06:56:24.628496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.027 [2024-04-17 06:56:24.628521] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.027 [2024-04-17 06:56:24.628748] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.027 [2024-04-17 06:56:24.628980] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.027 [2024-04-17 06:56:24.629003] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.027 [2024-04-17 06:56:24.629016] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.287 [2024-04-17 06:56:24.632501] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.287 [2024-04-17 06:56:24.641328] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.287 [2024-04-17 06:56:24.641720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.287 [2024-04-17 06:56:24.641909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.287 [2024-04-17 06:56:24.641935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.287 [2024-04-17 06:56:24.641952] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.287 [2024-04-17 06:56:24.642226] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.287 [2024-04-17 06:56:24.642444] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.287 [2024-04-17 06:56:24.642465] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.287 [2024-04-17 06:56:24.642478] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.287 [2024-04-17 06:56:24.645572] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.287 [2024-04-17 06:56:24.654794] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.287 [2024-04-17 06:56:24.655212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.287 [2024-04-17 06:56:24.655379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.287 [2024-04-17 06:56:24.655405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.287 [2024-04-17 06:56:24.655420] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.287 [2024-04-17 06:56:24.655658] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.287 [2024-04-17 06:56:24.655856] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.287 [2024-04-17 06:56:24.655883] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.287 [2024-04-17 06:56:24.655895] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.287 [2024-04-17 06:56:24.658933] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.287 [2024-04-17 06:56:24.668087] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.287 [2024-04-17 06:56:24.668476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.287 [2024-04-17 06:56:24.668666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.287 [2024-04-17 06:56:24.668691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.287 [2024-04-17 06:56:24.668707] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.287 [2024-04-17 06:56:24.668958] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.287 [2024-04-17 06:56:24.669170] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.287 [2024-04-17 06:56:24.669195] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.287 [2024-04-17 06:56:24.669208] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.287 [2024-04-17 06:56:24.672242] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.287 [2024-04-17 06:56:24.681336] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.287 [2024-04-17 06:56:24.681782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.287 [2024-04-17 06:56:24.682014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.287 [2024-04-17 06:56:24.682039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.287 [2024-04-17 06:56:24.682055] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.287 [2024-04-17 06:56:24.682322] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.287 [2024-04-17 06:56:24.682545] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.287 [2024-04-17 06:56:24.682565] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.287 [2024-04-17 06:56:24.682577] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.287 [2024-04-17 06:56:24.685733] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.287 [2024-04-17 06:56:24.694629] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.287 [2024-04-17 06:56:24.695110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.287 [2024-04-17 06:56:24.695290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.287 [2024-04-17 06:56:24.695317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.287 [2024-04-17 06:56:24.695332] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.287 [2024-04-17 06:56:24.695573] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.288 [2024-04-17 06:56:24.695786] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.288 [2024-04-17 06:56:24.695805] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.288 [2024-04-17 06:56:24.695821] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.288 [2024-04-17 06:56:24.698785] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.288 [2024-04-17 06:56:24.707828] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.288 [2024-04-17 06:56:24.708401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.288 [2024-04-17 06:56:24.708562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.288 [2024-04-17 06:56:24.708587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.288 [2024-04-17 06:56:24.708603] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.288 [2024-04-17 06:56:24.708841] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.288 [2024-04-17 06:56:24.709052] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.288 [2024-04-17 06:56:24.709072] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.288 [2024-04-17 06:56:24.709084] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.288 [2024-04-17 06:56:24.712061] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.288 [2024-04-17 06:56:24.721049] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.288 [2024-04-17 06:56:24.721527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.288 [2024-04-17 06:56:24.721693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.288 [2024-04-17 06:56:24.721718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.288 [2024-04-17 06:56:24.721733] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.288 [2024-04-17 06:56:24.721983] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.288 [2024-04-17 06:56:24.722205] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.288 [2024-04-17 06:56:24.722225] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.288 [2024-04-17 06:56:24.722238] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.288 [2024-04-17 06:56:24.725245] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.288 [2024-04-17 06:56:24.734315] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.288 [2024-04-17 06:56:24.734813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.288 [2024-04-17 06:56:24.734944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.288 [2024-04-17 06:56:24.734969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.288 [2024-04-17 06:56:24.734985] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.288 [2024-04-17 06:56:24.735236] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.288 [2024-04-17 06:56:24.735463] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.288 [2024-04-17 06:56:24.735483] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.288 [2024-04-17 06:56:24.735510] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.288 [2024-04-17 06:56:24.738419] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.288 [2024-04-17 06:56:24.747580] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.288 [2024-04-17 06:56:24.748067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.288 [2024-04-17 06:56:24.748235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.288 [2024-04-17 06:56:24.748262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.288 [2024-04-17 06:56:24.748277] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.288 [2024-04-17 06:56:24.748519] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.288 [2024-04-17 06:56:24.748717] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.288 [2024-04-17 06:56:24.748735] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.288 [2024-04-17 06:56:24.748747] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.288 [2024-04-17 06:56:24.751742] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.288 [2024-04-17 06:56:24.760805] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.288 [2024-04-17 06:56:24.761222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.288 [2024-04-17 06:56:24.761413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.288 [2024-04-17 06:56:24.761439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.288 [2024-04-17 06:56:24.761454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.288 [2024-04-17 06:56:24.761706] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.288 [2024-04-17 06:56:24.761904] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.288 [2024-04-17 06:56:24.761923] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.288 [2024-04-17 06:56:24.761935] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.288 [2024-04-17 06:56:24.764938] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.288 [2024-04-17 06:56:24.774104] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.288 [2024-04-17 06:56:24.774559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.288 [2024-04-17 06:56:24.774729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.288 [2024-04-17 06:56:24.774755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.288 [2024-04-17 06:56:24.774770] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.288 [2024-04-17 06:56:24.775012] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.288 [2024-04-17 06:56:24.775265] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.288 [2024-04-17 06:56:24.775287] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.288 [2024-04-17 06:56:24.775299] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.288 [2024-04-17 06:56:24.778272] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.288 [2024-04-17 06:56:24.787394] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.288 [2024-04-17 06:56:24.787879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.288 [2024-04-17 06:56:24.788040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.288 [2024-04-17 06:56:24.788065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.288 [2024-04-17 06:56:24.788081] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.288 [2024-04-17 06:56:24.788306] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.288 [2024-04-17 06:56:24.788555] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.288 [2024-04-17 06:56:24.788575] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.288 [2024-04-17 06:56:24.788587] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.288 [2024-04-17 06:56:24.791553] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.288 [2024-04-17 06:56:24.800676] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.288 [2024-04-17 06:56:24.801039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.288 [2024-04-17 06:56:24.801226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.288 [2024-04-17 06:56:24.801252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.288 [2024-04-17 06:56:24.801268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.288 [2024-04-17 06:56:24.801510] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.288 [2024-04-17 06:56:24.801724] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.288 [2024-04-17 06:56:24.801743] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.288 [2024-04-17 06:56:24.801755] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.288 [2024-04-17 06:56:24.804713] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.288 [2024-04-17 06:56:24.813956] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.288 [2024-04-17 06:56:24.814416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.288 [2024-04-17 06:56:24.814586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.288 [2024-04-17 06:56:24.814612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.288 [2024-04-17 06:56:24.814627] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.288 [2024-04-17 06:56:24.814880] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.288 [2024-04-17 06:56:24.815079] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.288 [2024-04-17 06:56:24.815097] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.288 [2024-04-17 06:56:24.815109] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.288 [2024-04-17 06:56:24.818132] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.289 [2024-04-17 06:56:24.827250] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.289 [2024-04-17 06:56:24.827715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.289 [2024-04-17 06:56:24.827868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.289 [2024-04-17 06:56:24.827893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.289 [2024-04-17 06:56:24.827908] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.289 [2024-04-17 06:56:24.828132] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.289 [2024-04-17 06:56:24.828386] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.289 [2024-04-17 06:56:24.828408] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.289 [2024-04-17 06:56:24.828421] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.289 [2024-04-17 06:56:24.831390] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.289 [2024-04-17 06:56:24.840510] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.289 [2024-04-17 06:56:24.840891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.289 [2024-04-17 06:56:24.841079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.289 [2024-04-17 06:56:24.841104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.289 [2024-04-17 06:56:24.841120] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.289 [2024-04-17 06:56:24.841341] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.289 [2024-04-17 06:56:24.841582] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.289 [2024-04-17 06:56:24.841601] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.289 [2024-04-17 06:56:24.841613] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.289 [2024-04-17 06:56:24.844571] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.289 [2024-04-17 06:56:24.853772] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.289 [2024-04-17 06:56:24.854265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.289 [2024-04-17 06:56:24.854427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.289 [2024-04-17 06:56:24.854453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.289 [2024-04-17 06:56:24.854469] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.289 [2024-04-17 06:56:24.854694] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.289 [2024-04-17 06:56:24.854906] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.289 [2024-04-17 06:56:24.854926] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.289 [2024-04-17 06:56:24.854937] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.289 [2024-04-17 06:56:24.857939] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.289 [2024-04-17 06:56:24.867005] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.289 [2024-04-17 06:56:24.867477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.289 [2024-04-17 06:56:24.867655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.289 [2024-04-17 06:56:24.867680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.289 [2024-04-17 06:56:24.867700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.289 [2024-04-17 06:56:24.867939] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.289 [2024-04-17 06:56:24.868138] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.289 [2024-04-17 06:56:24.868157] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.289 [2024-04-17 06:56:24.868169] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.289 [2024-04-17 06:56:24.871243] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.289 [2024-04-17 06:56:24.880239] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.289 [2024-04-17 06:56:24.880703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.289 [2024-04-17 06:56:24.880864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.289 [2024-04-17 06:56:24.880889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.289 [2024-04-17 06:56:24.880905] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.289 [2024-04-17 06:56:24.881117] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.289 [2024-04-17 06:56:24.881374] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.289 [2024-04-17 06:56:24.881396] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.289 [2024-04-17 06:56:24.881409] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.289 [2024-04-17 06:56:24.884734] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.548 [2024-04-17 06:56:24.894073] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.548 [2024-04-17 06:56:24.894512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.548 [2024-04-17 06:56:24.894658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.548 [2024-04-17 06:56:24.894684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.548 [2024-04-17 06:56:24.894700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.548 [2024-04-17 06:56:24.894941] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.548 [2024-04-17 06:56:24.895147] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.548 [2024-04-17 06:56:24.895190] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.548 [2024-04-17 06:56:24.895204] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.549 [2024-04-17 06:56:24.898320] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.549 [2024-04-17 06:56:24.907329] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.549 [2024-04-17 06:56:24.907736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.549 [2024-04-17 06:56:24.907922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.549 [2024-04-17 06:56:24.907948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.549 [2024-04-17 06:56:24.907964] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.549 [2024-04-17 06:56:24.908216] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.549 [2024-04-17 06:56:24.908441] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.549 [2024-04-17 06:56:24.908462] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.549 [2024-04-17 06:56:24.908475] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.549 [2024-04-17 06:56:24.911470] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.549 [2024-04-17 06:56:24.920522] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.549 [2024-04-17 06:56:24.920924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.549 [2024-04-17 06:56:24.921096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.549 [2024-04-17 06:56:24.921121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.549 [2024-04-17 06:56:24.921137] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.549 [2024-04-17 06:56:24.921373] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.549 [2024-04-17 06:56:24.921608] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.549 [2024-04-17 06:56:24.921627] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.549 [2024-04-17 06:56:24.921639] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.549 [2024-04-17 06:56:24.924590] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.549 [2024-04-17 06:56:24.933693] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.549 [2024-04-17 06:56:24.934109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.549 [2024-04-17 06:56:24.934295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.549 [2024-04-17 06:56:24.934321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.549 [2024-04-17 06:56:24.934337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.549 [2024-04-17 06:56:24.934573] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.549 [2024-04-17 06:56:24.934771] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.549 [2024-04-17 06:56:24.934790] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.549 [2024-04-17 06:56:24.934802] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.549 [2024-04-17 06:56:24.937767] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.549 [2024-04-17 06:56:24.946870] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.549 [2024-04-17 06:56:24.947313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.549 [2024-04-17 06:56:24.947474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.549 [2024-04-17 06:56:24.947499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.549 [2024-04-17 06:56:24.947515] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.549 [2024-04-17 06:56:24.947756] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.549 [2024-04-17 06:56:24.947959] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.549 [2024-04-17 06:56:24.947978] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.549 [2024-04-17 06:56:24.947990] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.549 [2024-04-17 06:56:24.950987] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.549 [2024-04-17 06:56:24.960075] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.549 [2024-04-17 06:56:24.960514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.549 [2024-04-17 06:56:24.960658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.549 [2024-04-17 06:56:24.960684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.549 [2024-04-17 06:56:24.960699] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.549 [2024-04-17 06:56:24.960938] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.549 [2024-04-17 06:56:24.961169] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.549 [2024-04-17 06:56:24.961198] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.549 [2024-04-17 06:56:24.961211] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.549 [2024-04-17 06:56:24.964260] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.549 [2024-04-17 06:56:24.973365] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.549 [2024-04-17 06:56:24.973783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.549 [2024-04-17 06:56:24.973924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.549 [2024-04-17 06:56:24.973950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.549 [2024-04-17 06:56:24.973965] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.549 [2024-04-17 06:56:24.974218] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.549 [2024-04-17 06:56:24.974441] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.549 [2024-04-17 06:56:24.974461] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.549 [2024-04-17 06:56:24.974474] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.549 [2024-04-17 06:56:24.977429] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.549 [2024-04-17 06:56:24.986493] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.549 [2024-04-17 06:56:24.986973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.549 [2024-04-17 06:56:24.987103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.549 [2024-04-17 06:56:24.987128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.549 [2024-04-17 06:56:24.987143] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.549 [2024-04-17 06:56:24.987378] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.549 [2024-04-17 06:56:24.987598] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.549 [2024-04-17 06:56:24.987622] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.549 [2024-04-17 06:56:24.987635] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.549 [2024-04-17 06:56:24.990588] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.549 [2024-04-17 06:56:24.999754] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.549 [2024-04-17 06:56:25.000233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.549 [2024-04-17 06:56:25.000425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.549 [2024-04-17 06:56:25.000450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.549 [2024-04-17 06:56:25.000465] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.549 [2024-04-17 06:56:25.000689] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.549 [2024-04-17 06:56:25.000902] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.549 [2024-04-17 06:56:25.000921] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.549 [2024-04-17 06:56:25.000933] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.549 [2024-04-17 06:56:25.003913] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.549 [2024-04-17 06:56:25.012920] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.549 [2024-04-17 06:56:25.013382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.549 [2024-04-17 06:56:25.013578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.549 [2024-04-17 06:56:25.013604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.549 [2024-04-17 06:56:25.013619] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.549 [2024-04-17 06:56:25.013858] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.549 [2024-04-17 06:56:25.014071] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.549 [2024-04-17 06:56:25.014090] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.549 [2024-04-17 06:56:25.014102] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.549 [2024-04-17 06:56:25.017063] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.549 [2024-04-17 06:56:25.026147] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.549 [2024-04-17 06:56:25.026629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.549 [2024-04-17 06:56:25.026792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.550 [2024-04-17 06:56:25.026818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.550 [2024-04-17 06:56:25.026833] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.550 [2024-04-17 06:56:25.027072] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.550 [2024-04-17 06:56:25.027337] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.550 [2024-04-17 06:56:25.027358] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.550 [2024-04-17 06:56:25.027376] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.550 [2024-04-17 06:56:25.030350] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.550 [2024-04-17 06:56:25.039452] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.550 [2024-04-17 06:56:25.039871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.550 [2024-04-17 06:56:25.040034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.550 [2024-04-17 06:56:25.040059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.550 [2024-04-17 06:56:25.040074] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.550 [2024-04-17 06:56:25.040310] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.550 [2024-04-17 06:56:25.040534] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.550 [2024-04-17 06:56:25.040553] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.550 [2024-04-17 06:56:25.040565] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.550 [2024-04-17 06:56:25.043518] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.550 [2024-04-17 06:56:25.052737] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.550 [2024-04-17 06:56:25.053118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.550 [2024-04-17 06:56:25.053300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.550 [2024-04-17 06:56:25.053334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.550 [2024-04-17 06:56:25.053349] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.550 [2024-04-17 06:56:25.053590] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.550 [2024-04-17 06:56:25.053787] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.550 [2024-04-17 06:56:25.053806] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.550 [2024-04-17 06:56:25.053818] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.550 [2024-04-17 06:56:25.056782] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.550 [2024-04-17 06:56:25.066011] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.550 [2024-04-17 06:56:25.066473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.550 [2024-04-17 06:56:25.066639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.550 [2024-04-17 06:56:25.066666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.550 [2024-04-17 06:56:25.066682] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.550 [2024-04-17 06:56:25.066919] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.550 [2024-04-17 06:56:25.067117] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.550 [2024-04-17 06:56:25.067135] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.550 [2024-04-17 06:56:25.067147] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.550 [2024-04-17 06:56:25.070173] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.550 [2024-04-17 06:56:25.079304] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.550 [2024-04-17 06:56:25.079800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.550 [2024-04-17 06:56:25.079984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.550 [2024-04-17 06:56:25.080010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.550 [2024-04-17 06:56:25.080025] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.550 [2024-04-17 06:56:25.080292] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.550 [2024-04-17 06:56:25.080517] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.550 [2024-04-17 06:56:25.080536] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.550 [2024-04-17 06:56:25.080548] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.550 [2024-04-17 06:56:25.083496] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.550 [2024-04-17 06:56:25.092548] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.550 [2024-04-17 06:56:25.092930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.550 [2024-04-17 06:56:25.093137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.550 [2024-04-17 06:56:25.093162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.550 [2024-04-17 06:56:25.093186] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.550 [2024-04-17 06:56:25.093401] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.550 [2024-04-17 06:56:25.093649] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.550 [2024-04-17 06:56:25.093668] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.550 [2024-04-17 06:56:25.093680] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.550 [2024-04-17 06:56:25.096634] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.550 [2024-04-17 06:56:25.105838] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.550 [2024-04-17 06:56:25.106253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.550 [2024-04-17 06:56:25.106381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.550 [2024-04-17 06:56:25.106406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.550 [2024-04-17 06:56:25.106421] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.550 [2024-04-17 06:56:25.106661] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.550 [2024-04-17 06:56:25.106858] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.550 [2024-04-17 06:56:25.106877] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.550 [2024-04-17 06:56:25.106889] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.550 [2024-04-17 06:56:25.109849] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.550 [2024-04-17 06:56:25.119016] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.550 [2024-04-17 06:56:25.119413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.550 [2024-04-17 06:56:25.119580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.550 [2024-04-17 06:56:25.119605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.550 [2024-04-17 06:56:25.119620] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.550 [2024-04-17 06:56:25.119860] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.550 [2024-04-17 06:56:25.120057] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.550 [2024-04-17 06:56:25.120076] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.550 [2024-04-17 06:56:25.120088] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.550 [2024-04-17 06:56:25.123074] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.550 [2024-04-17 06:56:25.132166] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.550 [2024-04-17 06:56:25.132610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.550 [2024-04-17 06:56:25.132775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.550 [2024-04-17 06:56:25.132800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.550 [2024-04-17 06:56:25.132816] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.550 [2024-04-17 06:56:25.133043] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.550 [2024-04-17 06:56:25.133282] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.550 [2024-04-17 06:56:25.133303] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.550 [2024-04-17 06:56:25.133317] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.550 [2024-04-17 06:56:25.136652] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.550 [2024-04-17 06:56:25.145435] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.550 [2024-04-17 06:56:25.145891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.550 [2024-04-17 06:56:25.146070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.550 [2024-04-17 06:56:25.146095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.550 [2024-04-17 06:56:25.146111] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.550 [2024-04-17 06:56:25.146331] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.550 [2024-04-17 06:56:25.146585] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.551 [2024-04-17 06:56:25.146604] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.551 [2024-04-17 06:56:25.146616] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.551 [2024-04-17 06:56:25.149647] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.810 [2024-04-17 06:56:25.158798] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.810 [2024-04-17 06:56:25.159236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.810 [2024-04-17 06:56:25.159369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.810 [2024-04-17 06:56:25.159395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.810 [2024-04-17 06:56:25.159411] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.810 [2024-04-17 06:56:25.159683] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.810 [2024-04-17 06:56:25.159898] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.810 [2024-04-17 06:56:25.159919] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.810 [2024-04-17 06:56:25.159932] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.810 [2024-04-17 06:56:25.163183] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.810 [2024-04-17 06:56:25.172228] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.810 [2024-04-17 06:56:25.172646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.810 [2024-04-17 06:56:25.172835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.810 [2024-04-17 06:56:25.172861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.810 [2024-04-17 06:56:25.172877] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.810 [2024-04-17 06:56:25.173115] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.810 [2024-04-17 06:56:25.173361] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.810 [2024-04-17 06:56:25.173382] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.810 [2024-04-17 06:56:25.173395] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.810 [2024-04-17 06:56:25.176367] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.810 [2024-04-17 06:56:25.185386] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.810 [2024-04-17 06:56:25.185821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.810 [2024-04-17 06:56:25.185960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.810 [2024-04-17 06:56:25.185985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.810 [2024-04-17 06:56:25.186001] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.810 [2024-04-17 06:56:25.186264] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.810 [2024-04-17 06:56:25.186475] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.810 [2024-04-17 06:56:25.186495] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.810 [2024-04-17 06:56:25.186508] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.810 [2024-04-17 06:56:25.189462] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.810 [2024-04-17 06:56:25.198702] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.810 [2024-04-17 06:56:25.199076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.810 [2024-04-17 06:56:25.199247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.810 [2024-04-17 06:56:25.199278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.811 [2024-04-17 06:56:25.199295] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.811 [2024-04-17 06:56:25.199537] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.811 [2024-04-17 06:56:25.199735] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.811 [2024-04-17 06:56:25.199754] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.811 [2024-04-17 06:56:25.199766] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.811 [2024-04-17 06:56:25.202721] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.811 [2024-04-17 06:56:25.211949] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.811 [2024-04-17 06:56:25.212382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.811 [2024-04-17 06:56:25.212538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.811 [2024-04-17 06:56:25.212579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.811 [2024-04-17 06:56:25.212595] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.811 [2024-04-17 06:56:25.212829] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.811 [2024-04-17 06:56:25.213027] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.811 [2024-04-17 06:56:25.213045] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.811 [2024-04-17 06:56:25.213057] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.811 [2024-04-17 06:56:25.216052] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.811 [2024-04-17 06:56:25.225139] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.811 [2024-04-17 06:56:25.225537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.811 [2024-04-17 06:56:25.225671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.811 [2024-04-17 06:56:25.225697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.811 [2024-04-17 06:56:25.225712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.811 [2024-04-17 06:56:25.225960] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.811 [2024-04-17 06:56:25.226173] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.811 [2024-04-17 06:56:25.226201] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.811 [2024-04-17 06:56:25.226214] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.811 [2024-04-17 06:56:25.229200] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.811 [2024-04-17 06:56:25.238449] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.811 [2024-04-17 06:56:25.238822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.811 [2024-04-17 06:56:25.238991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.811 [2024-04-17 06:56:25.239017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.811 [2024-04-17 06:56:25.239037] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.811 [2024-04-17 06:56:25.239307] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.811 [2024-04-17 06:56:25.239517] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.811 [2024-04-17 06:56:25.239537] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.811 [2024-04-17 06:56:25.239550] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.811 [2024-04-17 06:56:25.242539] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.811 [2024-04-17 06:56:25.251772] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.811 [2024-04-17 06:56:25.252188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.811 [2024-04-17 06:56:25.252355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.811 [2024-04-17 06:56:25.252381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.811 [2024-04-17 06:56:25.252397] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.811 [2024-04-17 06:56:25.252618] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.811 [2024-04-17 06:56:25.252832] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.811 [2024-04-17 06:56:25.252850] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.811 [2024-04-17 06:56:25.252862] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.811 [2024-04-17 06:56:25.255909] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.811 [2024-04-17 06:56:25.265074] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.811 [2024-04-17 06:56:25.265489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.811 [2024-04-17 06:56:25.265665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.811 [2024-04-17 06:56:25.265691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.811 [2024-04-17 06:56:25.265706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.811 [2024-04-17 06:56:25.265946] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.811 [2024-04-17 06:56:25.266174] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.811 [2024-04-17 06:56:25.266200] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.811 [2024-04-17 06:56:25.266213] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.811 [2024-04-17 06:56:25.269315] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.811 [2024-04-17 06:56:25.278351] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.811 [2024-04-17 06:56:25.278796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.811 [2024-04-17 06:56:25.278964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.811 [2024-04-17 06:56:25.278989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.811 [2024-04-17 06:56:25.279004] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.811 [2024-04-17 06:56:25.279281] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.811 [2024-04-17 06:56:25.279506] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.811 [2024-04-17 06:56:25.279525] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.811 [2024-04-17 06:56:25.279537] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.811 [2024-04-17 06:56:25.282604] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.811 [2024-04-17 06:56:25.291551] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.811 [2024-04-17 06:56:25.292037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.811 [2024-04-17 06:56:25.292208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.811 [2024-04-17 06:56:25.292234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.811 [2024-04-17 06:56:25.292250] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.811 [2024-04-17 06:56:25.292489] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.811 [2024-04-17 06:56:25.292702] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.811 [2024-04-17 06:56:25.292723] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.811 [2024-04-17 06:56:25.292736] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.811 [2024-04-17 06:56:25.295761] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.811 [2024-04-17 06:56:25.304746] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.811 [2024-04-17 06:56:25.305197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.811 [2024-04-17 06:56:25.305359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.811 [2024-04-17 06:56:25.305385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.811 [2024-04-17 06:56:25.305400] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.811 [2024-04-17 06:56:25.305626] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.811 [2024-04-17 06:56:25.305839] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.811 [2024-04-17 06:56:25.305859] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.811 [2024-04-17 06:56:25.305872] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.811 [2024-04-17 06:56:25.309090] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.811 [2024-04-17 06:56:25.318018] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.811 [2024-04-17 06:56:25.318415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.811 [2024-04-17 06:56:25.318573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.811 [2024-04-17 06:56:25.318598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.811 [2024-04-17 06:56:25.318613] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.811 [2024-04-17 06:56:25.318856] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.811 [2024-04-17 06:56:25.319073] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.812 [2024-04-17 06:56:25.319092] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.812 [2024-04-17 06:56:25.319104] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.812 [2024-04-17 06:56:25.322149] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.812 [2024-04-17 06:56:25.331289] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.812 [2024-04-17 06:56:25.331787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.812 [2024-04-17 06:56:25.331945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.812 [2024-04-17 06:56:25.331970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.812 [2024-04-17 06:56:25.331985] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.812 [2024-04-17 06:56:25.332233] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.812 [2024-04-17 06:56:25.332436] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.812 [2024-04-17 06:56:25.332456] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.812 [2024-04-17 06:56:25.332468] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.812 [2024-04-17 06:56:25.335466] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.812 [2024-04-17 06:56:25.344556] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.812 [2024-04-17 06:56:25.344966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.812 [2024-04-17 06:56:25.345099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.812 [2024-04-17 06:56:25.345124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.812 [2024-04-17 06:56:25.345140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.812 [2024-04-17 06:56:25.345359] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.812 [2024-04-17 06:56:25.345594] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.812 [2024-04-17 06:56:25.345613] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.812 [2024-04-17 06:56:25.345625] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.812 [2024-04-17 06:56:25.348583] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.812 [2024-04-17 06:56:25.357844] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.812 [2024-04-17 06:56:25.358278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.812 [2024-04-17 06:56:25.358410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.812 [2024-04-17 06:56:25.358436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.812 [2024-04-17 06:56:25.358451] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.812 [2024-04-17 06:56:25.358688] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.812 [2024-04-17 06:56:25.358903] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.812 [2024-04-17 06:56:25.358926] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.812 [2024-04-17 06:56:25.358939] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.812 [2024-04-17 06:56:25.361920] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.812 [2024-04-17 06:56:25.371094] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.812 [2024-04-17 06:56:25.371498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.812 [2024-04-17 06:56:25.371640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.812 [2024-04-17 06:56:25.371665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.812 [2024-04-17 06:56:25.371681] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.812 [2024-04-17 06:56:25.371915] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.812 [2024-04-17 06:56:25.372112] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.812 [2024-04-17 06:56:25.372131] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.812 [2024-04-17 06:56:25.372143] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.812 [2024-04-17 06:56:25.375143] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.812 [2024-04-17 06:56:25.384418] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.812 [2024-04-17 06:56:25.384826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.812 [2024-04-17 06:56:25.384993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.812 [2024-04-17 06:56:25.385018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.812 [2024-04-17 06:56:25.385034] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.812 [2024-04-17 06:56:25.385255] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.812 [2024-04-17 06:56:25.385472] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.812 [2024-04-17 06:56:25.385500] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.812 [2024-04-17 06:56:25.385514] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.812 [2024-04-17 06:56:25.388858] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.812 [2024-04-17 06:56:25.397767] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.812 [2024-04-17 06:56:25.398250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.812 [2024-04-17 06:56:25.398415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.812 [2024-04-17 06:56:25.398440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.812 [2024-04-17 06:56:25.398456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.812 [2024-04-17 06:56:25.398695] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.812 [2024-04-17 06:56:25.398909] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.812 [2024-04-17 06:56:25.398927] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.812 [2024-04-17 06:56:25.398944] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.812 [2024-04-17 06:56:25.401969] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:20.812 [2024-04-17 06:56:25.411008] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.812 [2024-04-17 06:56:25.411430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.812 [2024-04-17 06:56:25.411569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.812 [2024-04-17 06:56:25.411595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:20.812 [2024-04-17 06:56:25.411610] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:20.812 [2024-04-17 06:56:25.411850] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:20.812 [2024-04-17 06:56:25.412064] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.812 [2024-04-17 06:56:25.412083] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.812 [2024-04-17 06:56:25.412095] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.812 [2024-04-17 06:56:25.415634] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.072 [2024-04-17 06:56:25.424654] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.072 [2024-04-17 06:56:25.425080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-04-17 06:56:25.425236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-04-17 06:56:25.425264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.072 [2024-04-17 06:56:25.425281] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.072 [2024-04-17 06:56:25.425515] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.072 [2024-04-17 06:56:25.425713] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.072 [2024-04-17 06:56:25.425732] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.072 [2024-04-17 06:56:25.425744] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.072 [2024-04-17 06:56:25.428774] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:21.072 [2024-04-17 06:56:25.437860] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.072 [2024-04-17 06:56:25.438271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-04-17 06:56:25.438433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-04-17 06:56:25.438458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.072 [2024-04-17 06:56:25.438474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.072 [2024-04-17 06:56:25.438714] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.072 [2024-04-17 06:56:25.438927] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.072 [2024-04-17 06:56:25.438946] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.072 [2024-04-17 06:56:25.438958] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.072 [2024-04-17 06:56:25.441977] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.072 [2024-04-17 06:56:25.451666] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.072 [2024-04-17 06:56:25.452121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-04-17 06:56:25.452295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-04-17 06:56:25.452321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.072 [2024-04-17 06:56:25.452337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.072 [2024-04-17 06:56:25.452586] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.072 [2024-04-17 06:56:25.452779] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.072 [2024-04-17 06:56:25.452797] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.072 [2024-04-17 06:56:25.452809] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.072 [2024-04-17 06:56:25.456292] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:21.072 [2024-04-17 06:56:25.465498] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.072 [2024-04-17 06:56:25.465921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-04-17 06:56:25.466090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-04-17 06:56:25.466115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.072 [2024-04-17 06:56:25.466131] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.072 [2024-04-17 06:56:25.466389] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.072 [2024-04-17 06:56:25.466600] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.072 [2024-04-17 06:56:25.466620] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.072 [2024-04-17 06:56:25.466633] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.072 [2024-04-17 06:56:25.470097] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.072 [2024-04-17 06:56:25.479496] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.072 [2024-04-17 06:56:25.479914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-04-17 06:56:25.480113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-04-17 06:56:25.480139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.072 [2024-04-17 06:56:25.480154] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.072 [2024-04-17 06:56:25.480400] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.072 [2024-04-17 06:56:25.480611] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.072 [2024-04-17 06:56:25.480629] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.072 [2024-04-17 06:56:25.480642] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.072 [2024-04-17 06:56:25.484108] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:21.072 [2024-04-17 06:56:25.493471] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.072 [2024-04-17 06:56:25.494054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-04-17 06:56:25.494323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-04-17 06:56:25.494350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.072 [2024-04-17 06:56:25.494365] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.072 [2024-04-17 06:56:25.494602] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.073 [2024-04-17 06:56:25.494793] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.073 [2024-04-17 06:56:25.494811] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.073 [2024-04-17 06:56:25.494823] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.073 [2024-04-17 06:56:25.498303] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.073 [2024-04-17 06:56:25.507475] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.073 [2024-04-17 06:56:25.507849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-04-17 06:56:25.508073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-04-17 06:56:25.508101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.073 [2024-04-17 06:56:25.508118] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.073 [2024-04-17 06:56:25.508395] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.073 [2024-04-17 06:56:25.508625] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.073 [2024-04-17 06:56:25.508643] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.073 [2024-04-17 06:56:25.508655] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.073 [2024-04-17 06:56:25.512121] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:21.073 [2024-04-17 06:56:25.521322] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.073 [2024-04-17 06:56:25.521718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-04-17 06:56:25.521888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-04-17 06:56:25.521914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.073 [2024-04-17 06:56:25.521944] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.073 [2024-04-17 06:56:25.522197] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.073 [2024-04-17 06:56:25.522408] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.073 [2024-04-17 06:56:25.522428] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.073 [2024-04-17 06:56:25.522440] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.073 [2024-04-17 06:56:25.525925] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.073 [2024-04-17 06:56:25.535316] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.073 [2024-04-17 06:56:25.535777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-04-17 06:56:25.535971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-04-17 06:56:25.535997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.073 [2024-04-17 06:56:25.536012] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.073 [2024-04-17 06:56:25.536268] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.073 [2024-04-17 06:56:25.536466] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.073 [2024-04-17 06:56:25.536485] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.073 [2024-04-17 06:56:25.536497] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.073 [2024-04-17 06:56:25.539973] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:21.073 [2024-04-17 06:56:25.549133] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.073 [2024-04-17 06:56:25.549662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-04-17 06:56:25.549858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-04-17 06:56:25.549883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.073 [2024-04-17 06:56:25.549899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.073 [2024-04-17 06:56:25.550136] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.073 [2024-04-17 06:56:25.550372] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.073 [2024-04-17 06:56:25.550392] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.073 [2024-04-17 06:56:25.550404] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.073 [2024-04-17 06:56:25.553892] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.073 [2024-04-17 06:56:25.563050] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.073 [2024-04-17 06:56:25.563569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-04-17 06:56:25.563781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-04-17 06:56:25.563807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.073 [2024-04-17 06:56:25.563822] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.073 [2024-04-17 06:56:25.564072] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.073 [2024-04-17 06:56:25.564330] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.073 [2024-04-17 06:56:25.564365] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.073 [2024-04-17 06:56:25.564378] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.073 [2024-04-17 06:56:25.567885] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:21.073 [2024-04-17 06:56:25.577056] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.073 [2024-04-17 06:56:25.577488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-04-17 06:56:25.577754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-04-17 06:56:25.577779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.073 [2024-04-17 06:56:25.577799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.073 [2024-04-17 06:56:25.578044] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.073 [2024-04-17 06:56:25.578281] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.073 [2024-04-17 06:56:25.578301] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.073 [2024-04-17 06:56:25.578314] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.073 [2024-04-17 06:56:25.581877] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.073 [2024-04-17 06:56:25.591071] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.073 [2024-04-17 06:56:25.591555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-04-17 06:56:25.591721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-04-17 06:56:25.591745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.073 [2024-04-17 06:56:25.591760] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.073 [2024-04-17 06:56:25.591993] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.073 [2024-04-17 06:56:25.592225] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.073 [2024-04-17 06:56:25.592245] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.073 [2024-04-17 06:56:25.592258] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.073 [2024-04-17 06:56:25.595756] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:21.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 119101 Killed "${NVMF_APP[@]}" "$@" 00:30:21.073 [2024-04-17 06:56:25.604905] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.073 06:56:25 -- host/bdevperf.sh@36 -- # tgt_init 00:30:21.073 06:56:25 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:21.073 06:56:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:30:21.073 [2024-04-17 06:56:25.605359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 06:56:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:21.073 [2024-04-17 06:56:25.605551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-04-17 06:56:25.605577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.073 [2024-04-17 06:56:25.605592] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.073 06:56:25 -- common/autotest_common.sh@10 -- # set +x 00:30:21.073 [2024-04-17 06:56:25.605844] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.073 [2024-04-17 06:56:25.606058] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.073 [2024-04-17 06:56:25.606078] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.073 [2024-04-17 06:56:25.606091] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:21.073 06:56:25 -- nvmf/common.sh@470 -- # nvmfpid=120111 00:30:21.073 06:56:25 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:21.073 06:56:25 -- nvmf/common.sh@471 -- # waitforlisten 120111 00:30:21.073 [2024-04-17 06:56:25.609648] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:21.073 06:56:25 -- common/autotest_common.sh@817 -- # '[' -z 120111 ']' 00:30:21.073 06:56:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:21.073 06:56:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:21.074 06:56:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:21.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:21.074 06:56:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:21.074 06:56:25 -- common/autotest_common.sh@10 -- # set +x 00:30:21.074 [2024-04-17 06:56:25.618831] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.074 [2024-04-17 06:56:25.619291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-04-17 06:56:25.619475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-04-17 06:56:25.619501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.074 [2024-04-17 06:56:25.619516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.074 [2024-04-17 06:56:25.619766] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.074 [2024-04-17 06:56:25.619964] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.074 [2024-04-17 06:56:25.619984] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.074 [2024-04-17 06:56:25.619997] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.074 [2024-04-17 06:56:25.623543] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.074 [2024-04-17 06:56:25.632067] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.074 [2024-04-17 06:56:25.632474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-04-17 06:56:25.632625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-04-17 06:56:25.632650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.074 [2024-04-17 06:56:25.632666] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.074 [2024-04-17 06:56:25.632889] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.074 [2024-04-17 06:56:25.633101] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.074 [2024-04-17 06:56:25.633119] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.074 [2024-04-17 06:56:25.633132] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.074 [2024-04-17 06:56:25.636274] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:21.074 [2024-04-17 06:56:25.645576] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.074 [2024-04-17 06:56:25.645961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-04-17 06:56:25.646147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-04-17 06:56:25.646173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.074 [2024-04-17 06:56:25.646198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.074 [2024-04-17 06:56:25.646411] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.074 [2024-04-17 06:56:25.646640] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.074 [2024-04-17 06:56:25.646660] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.074 [2024-04-17 06:56:25.646672] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.074 [2024-04-17 06:56:25.649873] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:21.074 [2024-04-17 06:56:25.658847] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.074 [2024-04-17 06:56:25.659294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-04-17 06:56:25.659373] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:30:21.074 [2024-04-17 06:56:25.659453] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:21.074 [2024-04-17 06:56:25.659465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-04-17 06:56:25.659492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.074 [2024-04-17 06:56:25.659507] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.074 [2024-04-17 06:56:25.659746] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.074 [2024-04-17 06:56:25.659943] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.074 [2024-04-17 06:56:25.659962] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.074 [2024-04-17 06:56:25.659975] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.074 [2024-04-17 06:56:25.663075] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:21.074 [2024-04-17 06:56:25.672347] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.074 [2024-04-17 06:56:25.672808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-04-17 06:56:25.673009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-04-17 06:56:25.673034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.074 [2024-04-17 06:56:25.673049] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.074 [2024-04-17 06:56:25.673271] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.074 [2024-04-17 06:56:25.673510] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.074 [2024-04-17 06:56:25.673529] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.074 [2024-04-17 06:56:25.673542] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.074 [2024-04-17 06:56:25.676998] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.334 [2024-04-17 06:56:25.686070] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.334 [2024-04-17 06:56:25.686498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.334 [2024-04-17 06:56:25.686686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.334 [2024-04-17 06:56:25.686712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.334 [2024-04-17 06:56:25.686728] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.334 [2024-04-17 06:56:25.686984] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.334 [2024-04-17 06:56:25.687208] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.334 [2024-04-17 06:56:25.687230] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.334 [2024-04-17 06:56:25.687243] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.334 [2024-04-17 06:56:25.690221] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:21.334 [2024-04-17 06:56:25.699346] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.334 [2024-04-17 06:56:25.699743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.334 [2024-04-17 06:56:25.699965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.334 [2024-04-17 06:56:25.699991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.334 [2024-04-17 06:56:25.700007] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.334 [2024-04-17 06:56:25.700260] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.334 [2024-04-17 06:56:25.700487] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.334 [2024-04-17 06:56:25.700507] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.334 [2024-04-17 06:56:25.700535] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.334 EAL: No free 2048 kB hugepages reported on node 1 00:30:21.334 [2024-04-17 06:56:25.703537] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.334 [2024-04-17 06:56:25.712690] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.334 [2024-04-17 06:56:25.713122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.334 [2024-04-17 06:56:25.713308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.334 [2024-04-17 06:56:25.713334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.334 [2024-04-17 06:56:25.713350] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.334 [2024-04-17 06:56:25.713589] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.334 [2024-04-17 06:56:25.713793] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.334 [2024-04-17 06:56:25.713812] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.334 [2024-04-17 06:56:25.713825] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.334 [2024-04-17 06:56:25.717008] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:21.334 [2024-04-17 06:56:25.726028] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.334 [2024-04-17 06:56:25.726463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.334 [2024-04-17 06:56:25.726628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.334 [2024-04-17 06:56:25.726653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.334 [2024-04-17 06:56:25.726669] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.334 [2024-04-17 06:56:25.726918] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.334 [2024-04-17 06:56:25.727122] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.334 [2024-04-17 06:56:25.727141] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.334 [2024-04-17 06:56:25.727169] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.334 [2024-04-17 06:56:25.730277] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.334 [2024-04-17 06:56:25.735757] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:21.334 [2024-04-17 06:56:25.739534] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.334 [2024-04-17 06:56:25.740050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.334 [2024-04-17 06:56:25.740240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.334 [2024-04-17 06:56:25.740266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.334 [2024-04-17 06:56:25.740283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.334 [2024-04-17 06:56:25.740526] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.334 [2024-04-17 06:56:25.740732] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.334 [2024-04-17 06:56:25.740752] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.334 [2024-04-17 06:56:25.740765] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.334 [2024-04-17 06:56:25.743903] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:21.334 [2024-04-17 06:56:25.752994] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.334 [2024-04-17 06:56:25.753582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.334 [2024-04-17 06:56:25.753814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.334 [2024-04-17 06:56:25.753841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.334 [2024-04-17 06:56:25.753860] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.334 [2024-04-17 06:56:25.754113] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.334 [2024-04-17 06:56:25.754357] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.334 [2024-04-17 06:56:25.754379] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.334 [2024-04-17 06:56:25.754395] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.334 [2024-04-17 06:56:25.757509] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.334 [2024-04-17 06:56:25.766335] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.334 [2024-04-17 06:56:25.766738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.334 [2024-04-17 06:56:25.766933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.334 [2024-04-17 06:56:25.766958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.334 [2024-04-17 06:56:25.766974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.334 [2024-04-17 06:56:25.767226] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.334 [2024-04-17 06:56:25.767460] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.334 [2024-04-17 06:56:25.767481] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.334 [2024-04-17 06:56:25.767494] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.334 [2024-04-17 06:56:25.770501] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:21.334 [2024-04-17 06:56:25.779661] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.334 [2024-04-17 06:56:25.780082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.334 [2024-04-17 06:56:25.780236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.334 [2024-04-17 06:56:25.780263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.334 [2024-04-17 06:56:25.780280] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.334 [2024-04-17 06:56:25.780522] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.334 [2024-04-17 06:56:25.780727] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.334 [2024-04-17 06:56:25.780746] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.334 [2024-04-17 06:56:25.780759] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.334 [2024-04-17 06:56:25.783801] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.334 [2024-04-17 06:56:25.793076] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.334 [2024-04-17 06:56:25.793758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.334 [2024-04-17 06:56:25.793925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.334 [2024-04-17 06:56:25.793952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.334 [2024-04-17 06:56:25.793971] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.334 [2024-04-17 06:56:25.794227] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.334 [2024-04-17 06:56:25.794435] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.334 [2024-04-17 06:56:25.794455] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.334 [2024-04-17 06:56:25.794471] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.335 [2024-04-17 06:56:25.797556] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:21.335 [2024-04-17 06:56:25.806601] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.335 [2024-04-17 06:56:25.807065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.335 [2024-04-17 06:56:25.807221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.335 [2024-04-17 06:56:25.807247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.335 [2024-04-17 06:56:25.807264] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.335 [2024-04-17 06:56:25.807495] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.335 [2024-04-17 06:56:25.807716] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.335 [2024-04-17 06:56:25.807744] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.335 [2024-04-17 06:56:25.807758] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.335 [2024-04-17 06:56:25.810799] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.335 [2024-04-17 06:56:25.820001] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.335 [2024-04-17 06:56:25.820447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.335 [2024-04-17 06:56:25.820598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.335 [2024-04-17 06:56:25.820623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.335 [2024-04-17 06:56:25.820639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.335 [2024-04-17 06:56:25.820877] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.335 [2024-04-17 06:56:25.821081] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.335 [2024-04-17 06:56:25.821100] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.335 [2024-04-17 06:56:25.821113] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.335 [2024-04-17 06:56:25.824173] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:21.335 [2024-04-17 06:56:25.826615] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:21.335 [2024-04-17 06:56:25.826652] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:21.335 [2024-04-17 06:56:25.826666] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:21.335 [2024-04-17 06:56:25.826678] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:21.335 [2024-04-17 06:56:25.826688] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:21.335 [2024-04-17 06:56:25.826882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:21.335 [2024-04-17 06:56:25.826941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:21.335 [2024-04-17 06:56:25.826944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:21.335 [2024-04-17 06:56:25.833630] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.335 [2024-04-17 06:56:25.834114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.335 [2024-04-17 06:56:25.834299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.335 [2024-04-17 06:56:25.834326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.335 [2024-04-17 06:56:25.834344] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.335 [2024-04-17 06:56:25.834564] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.335 [2024-04-17 06:56:25.834784] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.335 [2024-04-17 06:56:25.834806] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.335 [2024-04-17 06:56:25.834821] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.335 [2024-04-17 06:56:25.838027] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:21.335 [2024-04-17 06:56:25.847190] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.335 [2024-04-17 06:56:25.847770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.335 [2024-04-17 06:56:25.847940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.335 [2024-04-17 06:56:25.847966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.335 [2024-04-17 06:56:25.847985] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.335 [2024-04-17 06:56:25.848216] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.335 [2024-04-17 06:56:25.848437] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.335 [2024-04-17 06:56:25.848458] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.335 [2024-04-17 06:56:25.848474] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.335 [2024-04-17 06:56:25.851668] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.335 [2024-04-17 06:56:25.860636] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.335 [2024-04-17 06:56:25.861232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.335 [2024-04-17 06:56:25.861446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.335 [2024-04-17 06:56:25.861473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.335 [2024-04-17 06:56:25.861492] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.335 [2024-04-17 06:56:25.861733] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.335 [2024-04-17 06:56:25.861949] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.335 [2024-04-17 06:56:25.861970] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.335 [2024-04-17 06:56:25.861986] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.335 [2024-04-17 06:56:25.865172] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:21.335 [2024-04-17 06:56:25.874192] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.335 [2024-04-17 06:56:25.874756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.335 [2024-04-17 06:56:25.874960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.335 [2024-04-17 06:56:25.874986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.335 [2024-04-17 06:56:25.875005] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.335 [2024-04-17 06:56:25.875268] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.335 [2024-04-17 06:56:25.875491] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.335 [2024-04-17 06:56:25.875512] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.335 [2024-04-17 06:56:25.875544] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.335 [2024-04-17 06:56:25.878691] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.335 [2024-04-17 06:56:25.887682] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.335 [2024-04-17 06:56:25.888165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.335 [2024-04-17 06:56:25.888371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.335 [2024-04-17 06:56:25.888405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.335 [2024-04-17 06:56:25.888423] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.335 [2024-04-17 06:56:25.888659] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.335 [2024-04-17 06:56:25.888874] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.335 [2024-04-17 06:56:25.888895] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.335 [2024-04-17 06:56:25.888910] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.335 [2024-04-17 06:56:25.892170] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:21.335 [2024-04-17 06:56:25.901230] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.335 [2024-04-17 06:56:25.901722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.335 [2024-04-17 06:56:25.901894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.335 [2024-04-17 06:56:25.901921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.335 [2024-04-17 06:56:25.901939] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.335 [2024-04-17 06:56:25.902185] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.335 [2024-04-17 06:56:25.902402] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.335 [2024-04-17 06:56:25.902422] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.335 [2024-04-17 06:56:25.902438] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.335 [2024-04-17 06:56:25.905615] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.335 [2024-04-17 06:56:25.914734] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.335 [2024-04-17 06:56:25.915222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.335 [2024-04-17 06:56:25.915368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.335 [2024-04-17 06:56:25.915395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.335 [2024-04-17 06:56:25.915413] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.335 [2024-04-17 06:56:25.915649] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.336 [2024-04-17 06:56:25.915863] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.336 [2024-04-17 06:56:25.915883] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.336 [2024-04-17 06:56:25.915898] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.336 [2024-04-17 06:56:25.918994] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:21.336 [2024-04-17 06:56:25.928228] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.336 [2024-04-17 06:56:25.928637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.336 [2024-04-17 06:56:25.928775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.336 [2024-04-17 06:56:25.928801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.336 [2024-04-17 06:56:25.928828] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.336 [2024-04-17 06:56:25.929055] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.336 [2024-04-17 06:56:25.929295] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.336 [2024-04-17 06:56:25.929316] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.336 [2024-04-17 06:56:25.929329] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.336 [2024-04-17 06:56:25.932561] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
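The cycle that repeats above — "resetting controller", two posix_sock_create connect() failures with errno = 111, a failed qpair connect to 10.0.0.2:4420, then "Resetting controller failed." — appears to be the bdevperf host retrying while nothing is listening on 10.0.0.2:4420 yet; the listener is only re-added further down, after which the reset finally succeeds. On Linux, errno 111 is ECONNREFUSED. A quick side check (not part of the test scripts; assumes the kernel UAPI headers are installed):

  # Side check only: map errno 111 to its symbolic name on a typical Linux box.
  grep -w 111 /usr/include/asm-generic/errno.h
  #   #define ECONNREFUSED    111     /* Connection refused */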
00:30:21.595 [2024-04-17 06:56:25.941927] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.595 [2024-04-17 06:56:25.942352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.595 [2024-04-17 06:56:25.942487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.595 [2024-04-17 06:56:25.942514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.595 [2024-04-17 06:56:25.942530] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.595 [2024-04-17 06:56:25.942743] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.595 [2024-04-17 06:56:25.942961] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.595 [2024-04-17 06:56:25.942982] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.595 [2024-04-17 06:56:25.942996] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.595 06:56:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:21.595 06:56:25 -- common/autotest_common.sh@850 -- # return 0 00:30:21.595 06:56:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:30:21.595 06:56:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:21.595 06:56:25 -- common/autotest_common.sh@10 -- # set +x 00:30:21.595 [2024-04-17 06:56:25.946467] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:21.595 [2024-04-17 06:56:25.955573] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.595 [2024-04-17 06:56:25.955984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.595 [2024-04-17 06:56:25.956158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.595 [2024-04-17 06:56:25.956190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.595 [2024-04-17 06:56:25.956207] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.595 [2024-04-17 06:56:25.956430] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.595 [2024-04-17 06:56:25.956657] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.595 [2024-04-17 06:56:25.956678] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.595 [2024-04-17 06:56:25.956691] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.595 [2024-04-17 06:56:25.959916] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.595 [2024-04-17 06:56:25.969037] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.595 06:56:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:21.595 06:56:25 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:21.595 [2024-04-17 06:56:25.969438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.595 [2024-04-17 06:56:25.969642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.595 [2024-04-17 06:56:25.969668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.595 [2024-04-17 06:56:25.969684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.595 06:56:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:21.595 [2024-04-17 06:56:25.969912] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.595 06:56:25 -- common/autotest_common.sh@10 -- # set +x 00:30:21.595 [2024-04-17 06:56:25.970124] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.595 [2024-04-17 06:56:25.970145] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.595 [2024-04-17 06:56:25.970172] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.595 [2024-04-17 06:56:25.973410] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:21.595 [2024-04-17 06:56:25.973495] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:21.595 06:56:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:21.595 06:56:25 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:21.595 06:56:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:21.595 06:56:25 -- common/autotest_common.sh@10 -- # set +x 00:30:21.595 [2024-04-17 06:56:25.982493] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.595 [2024-04-17 06:56:25.982887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.595 [2024-04-17 06:56:25.983056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.595 [2024-04-17 06:56:25.983081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.595 [2024-04-17 06:56:25.983096] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.595 [2024-04-17 06:56:25.983323] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.595 [2024-04-17 06:56:25.983561] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.595 [2024-04-17 06:56:25.983580] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.595 [2024-04-17 06:56:25.983593] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:21.595 [2024-04-17 06:56:25.986666] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:21.595 [2024-04-17 06:56:25.995955] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.595 [2024-04-17 06:56:25.996402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.595 [2024-04-17 06:56:25.996567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.595 [2024-04-17 06:56:25.996593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.595 [2024-04-17 06:56:25.996609] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.595 [2024-04-17 06:56:25.996822] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.595 [2024-04-17 06:56:25.997049] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.595 [2024-04-17 06:56:25.997069] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.595 [2024-04-17 06:56:25.997091] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.595 [2024-04-17 06:56:26.000317] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:21.595 [2024-04-17 06:56:26.009412] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.595 [2024-04-17 06:56:26.009954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.595 [2024-04-17 06:56:26.010124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.595 [2024-04-17 06:56:26.010151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.595 [2024-04-17 06:56:26.010170] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.595 [2024-04-17 06:56:26.010421] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.595 [2024-04-17 06:56:26.010636] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.595 [2024-04-17 06:56:26.010657] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.595 [2024-04-17 06:56:26.010673] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.595 [2024-04-17 06:56:26.013809] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.595 Malloc0 00:30:21.595 06:56:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:21.595 06:56:26 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:21.595 06:56:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:21.595 06:56:26 -- common/autotest_common.sh@10 -- # set +x 00:30:21.595 [2024-04-17 06:56:26.022864] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.595 [2024-04-17 06:56:26.023270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.595 [2024-04-17 06:56:26.023444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.595 [2024-04-17 06:56:26.023470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.596 [2024-04-17 06:56:26.023486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.596 [2024-04-17 06:56:26.023701] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.596 [2024-04-17 06:56:26.023927] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.596 [2024-04-17 06:56:26.023947] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.596 [2024-04-17 06:56:26.023960] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.596 06:56:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:21.596 06:56:26 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:21.596 06:56:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:21.596 06:56:26 -- common/autotest_common.sh@10 -- # set +x 00:30:21.596 [2024-04-17 06:56:26.027198] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.596 06:56:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:21.596 06:56:26 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:21.596 06:56:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:21.596 06:56:26 -- common/autotest_common.sh@10 -- # set +x 00:30:21.596 [2024-04-17 06:56:26.036376] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.596 [2024-04-17 06:56:26.036775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.596 [2024-04-17 06:56:26.036941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.596 [2024-04-17 06:56:26.036973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd61040 with addr=10.0.0.2, port=4420 00:30:21.596 [2024-04-17 06:56:26.036990] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61040 is same with the state(5) to be set 00:30:21.596 [2024-04-17 06:56:26.037223] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd61040 (9): Bad file descriptor 00:30:21.596 [2024-04-17 06:56:26.037434] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.596 [2024-04-17 06:56:26.037454] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.596 [2024-04-17 06:56:26.037467] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.596 [2024-04-17 06:56:26.037558] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:21.596 [2024-04-17 06:56:26.040689] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:21.596 06:56:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:21.596 06:56:26 -- host/bdevperf.sh@38 -- # wait 119331 00:30:21.596 [2024-04-17 06:56:26.049855] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.596 [2024-04-17 06:56:26.175139] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
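Interleaved with the reconnect noise above, host/bdevperf.sh has just rebuilt the soft target over RPC: create the TCP transport, a 64 MB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace, and the 10.0.0.2:4420 listener — after which the pending controller reset completes ("Resetting controller successful"). Outside the autotest harness the same sequence would look roughly like the sketch below (assuming scripts/rpc.py from the SPDK tree and the default /var/tmp/spdk.sock RPC socket; not a verbatim excerpt of the script):

  # Sketch of the RPCs issued above by rpc_cmd, one per line.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420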
00:30:31.564 00:30:31.564 Latency(us) 00:30:31.564 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:31.564 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:31.564 Verification LBA range: start 0x0 length 0x4000 00:30:31.564 Nvme1n1 : 15.01 6566.15 25.65 9241.33 0.00 8072.77 1104.40 23398.78 00:30:31.564 =================================================================================================================== 00:30:31.564 Total : 6566.15 25.65 9241.33 0.00 8072.77 1104.40 23398.78 00:30:31.564 06:56:35 -- host/bdevperf.sh@39 -- # sync 00:30:31.564 06:56:35 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:31.564 06:56:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:31.564 06:56:35 -- common/autotest_common.sh@10 -- # set +x 00:30:31.564 06:56:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:31.564 06:56:35 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:30:31.564 06:56:35 -- host/bdevperf.sh@44 -- # nvmftestfini 00:30:31.564 06:56:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:30:31.564 06:56:35 -- nvmf/common.sh@117 -- # sync 00:30:31.564 06:56:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:31.564 06:56:35 -- nvmf/common.sh@120 -- # set +e 00:30:31.564 06:56:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:31.564 06:56:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:31.564 rmmod nvme_tcp 00:30:31.564 rmmod nvme_fabrics 00:30:31.564 rmmod nvme_keyring 00:30:31.564 06:56:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:31.564 06:56:35 -- nvmf/common.sh@124 -- # set -e 00:30:31.564 06:56:35 -- nvmf/common.sh@125 -- # return 0 00:30:31.564 06:56:35 -- nvmf/common.sh@478 -- # '[' -n 120111 ']' 00:30:31.564 06:56:35 -- nvmf/common.sh@479 -- # killprocess 120111 00:30:31.564 06:56:35 -- common/autotest_common.sh@936 -- # '[' -z 120111 ']' 00:30:31.564 06:56:35 -- common/autotest_common.sh@940 -- # kill -0 120111 00:30:31.564 06:56:35 -- common/autotest_common.sh@941 -- # uname 00:30:31.564 06:56:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:31.564 06:56:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 120111 00:30:31.564 06:56:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:31.564 06:56:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:31.564 06:56:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 120111' 00:30:31.564 killing process with pid 120111 00:30:31.564 06:56:35 -- common/autotest_common.sh@955 -- # kill 120111 00:30:31.564 06:56:35 -- common/autotest_common.sh@960 -- # wait 120111 00:30:31.564 06:56:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:30:31.564 06:56:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:30:31.564 06:56:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:30:31.564 06:56:35 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:31.564 06:56:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:31.564 06:56:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.564 06:56:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:31.564 06:56:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:33.464 06:56:37 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:33.464 00:30:33.464 real 0m22.400s 00:30:33.464 user 0m59.798s 00:30:33.464 sys 0m4.366s 00:30:33.464 06:56:37 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:30:33.464 06:56:37 -- common/autotest_common.sh@10 -- # set +x 00:30:33.464 ************************************ 00:30:33.464 END TEST nvmf_bdevperf 00:30:33.464 ************************************ 00:30:33.464 06:56:37 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:33.464 06:56:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:33.464 06:56:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:33.464 06:56:37 -- common/autotest_common.sh@10 -- # set +x 00:30:33.464 ************************************ 00:30:33.464 START TEST nvmf_target_disconnect 00:30:33.464 ************************************ 00:30:33.464 06:56:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:33.464 * Looking for test storage... 00:30:33.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:33.464 06:56:37 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:33.464 06:56:37 -- nvmf/common.sh@7 -- # uname -s 00:30:33.464 06:56:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:33.464 06:56:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:33.464 06:56:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:33.464 06:56:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:33.464 06:56:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:33.464 06:56:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:33.464 06:56:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:33.464 06:56:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:33.464 06:56:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:33.464 06:56:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:33.464 06:56:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:33.464 06:56:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:33.464 06:56:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:33.464 06:56:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:33.464 06:56:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:33.464 06:56:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:33.464 06:56:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:33.464 06:56:37 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:33.464 06:56:37 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:33.464 06:56:37 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:33.464 06:56:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.464 06:56:37 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.464 06:56:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.464 06:56:37 -- paths/export.sh@5 -- # export PATH 00:30:33.465 06:56:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.465 06:56:37 -- nvmf/common.sh@47 -- # : 0 00:30:33.465 06:56:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:33.465 06:56:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:33.465 06:56:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:33.465 06:56:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:33.465 06:56:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:33.465 06:56:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:33.465 06:56:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:33.465 06:56:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:33.465 06:56:37 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:33.465 06:56:37 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:33.465 06:56:37 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:33.465 06:56:37 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:30:33.465 06:56:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:30:33.465 06:56:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:33.465 06:56:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:30:33.465 06:56:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:30:33.465 06:56:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:30:33.465 06:56:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.465 06:56:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:33.465 06:56:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:33.465 06:56:37 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:30:33.465 06:56:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:30:33.465 06:56:37 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:30:33.465 06:56:37 -- common/autotest_common.sh@10 -- # set +x 00:30:35.366 06:56:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:35.366 06:56:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:35.366 06:56:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:35.366 06:56:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:35.366 06:56:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:35.366 06:56:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:35.366 06:56:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:35.366 06:56:39 -- nvmf/common.sh@295 -- # net_devs=() 00:30:35.366 06:56:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:35.366 06:56:39 -- nvmf/common.sh@296 -- # e810=() 00:30:35.366 06:56:39 -- nvmf/common.sh@296 -- # local -ga e810 00:30:35.366 06:56:39 -- nvmf/common.sh@297 -- # x722=() 00:30:35.366 06:56:39 -- nvmf/common.sh@297 -- # local -ga x722 00:30:35.366 06:56:39 -- nvmf/common.sh@298 -- # mlx=() 00:30:35.366 06:56:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:35.366 06:56:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:35.366 06:56:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:35.366 06:56:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:35.366 06:56:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:35.366 06:56:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:35.366 06:56:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:35.366 06:56:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:35.366 06:56:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:35.366 06:56:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:35.366 06:56:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:35.366 06:56:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:35.366 06:56:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:35.366 06:56:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:35.366 06:56:39 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:35.366 06:56:39 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:35.366 06:56:39 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:35.366 06:56:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:35.366 06:56:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:35.366 06:56:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:35.366 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:35.366 06:56:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:35.366 06:56:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:35.366 06:56:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.366 06:56:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.366 06:56:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:35.366 06:56:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:35.366 06:56:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:35.366 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:35.366 06:56:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:35.366 06:56:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:35.366 06:56:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.366 06:56:39 
-- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.366 06:56:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:35.366 06:56:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:35.366 06:56:39 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:35.366 06:56:39 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:35.366 06:56:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:35.366 06:56:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.366 06:56:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:35.366 06:56:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.367 06:56:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:35.367 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:35.367 06:56:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.367 06:56:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:35.367 06:56:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.367 06:56:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:35.367 06:56:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.367 06:56:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:35.367 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:35.367 06:56:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.367 06:56:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:30:35.367 06:56:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:30:35.367 06:56:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:30:35.367 06:56:39 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:30:35.367 06:56:39 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:30:35.367 06:56:39 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:35.367 06:56:39 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:35.367 06:56:39 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:35.367 06:56:39 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:35.367 06:56:39 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:35.367 06:56:39 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:35.367 06:56:39 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:35.367 06:56:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:35.367 06:56:39 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:35.367 06:56:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:35.367 06:56:39 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:35.367 06:56:39 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:35.367 06:56:39 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:35.367 06:56:39 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:35.367 06:56:39 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:35.367 06:56:39 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:35.367 06:56:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:35.367 06:56:39 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:35.367 06:56:39 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:35.367 06:56:39 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:35.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:35.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:30:35.367 00:30:35.367 --- 10.0.0.2 ping statistics --- 00:30:35.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.367 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:30:35.367 06:56:39 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:35.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:35.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:30:35.367 00:30:35.367 --- 10.0.0.1 ping statistics --- 00:30:35.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.367 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:30:35.367 06:56:39 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:35.367 06:56:39 -- nvmf/common.sh@411 -- # return 0 00:30:35.367 06:56:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:30:35.367 06:56:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:35.367 06:56:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:30:35.367 06:56:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:30:35.367 06:56:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:35.367 06:56:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:30:35.367 06:56:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:30:35.367 06:56:39 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:35.367 06:56:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:35.367 06:56:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:35.367 06:56:39 -- common/autotest_common.sh@10 -- # set +x 00:30:35.367 ************************************ 00:30:35.367 START TEST nvmf_target_disconnect_tc1 00:30:35.367 ************************************ 00:30:35.367 06:56:39 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1 00:30:35.367 06:56:39 -- host/target_disconnect.sh@32 -- # set +e 00:30:35.367 06:56:39 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:35.367 EAL: No free 2048 kB hugepages reported on node 1 00:30:35.625 [2024-04-17 06:56:39.994455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.625 [2024-04-17 06:56:39.994696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.625 [2024-04-17 06:56:39.994723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96770 with addr=10.0.0.2, port=4420 00:30:35.625 [2024-04-17 06:56:39.994756] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:35.625 [2024-04-17 06:56:39.994777] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:35.625 [2024-04-17 06:56:39.994790] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:30:35.625 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:35.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:35.625 Initializing NVMe Controllers 00:30:35.625 06:56:40 -- host/target_disconnect.sh@33 -- # trap - ERR 00:30:35.625 06:56:40 -- host/target_disconnect.sh@33 -- # print_backtrace 00:30:35.625 06:56:40 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:30:35.625 06:56:40 -- common/autotest_common.sh@1139 -- # return 0 00:30:35.625 
06:56:40 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:30:35.625 06:56:40 -- host/target_disconnect.sh@41 -- # set -e 00:30:35.625 00:30:35.625 real 0m0.088s 00:30:35.625 user 0m0.042s 00:30:35.625 sys 0m0.046s 00:30:35.625 06:56:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:35.625 06:56:40 -- common/autotest_common.sh@10 -- # set +x 00:30:35.625 ************************************ 00:30:35.625 END TEST nvmf_target_disconnect_tc1 00:30:35.625 ************************************ 00:30:35.625 06:56:40 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:35.625 06:56:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:35.625 06:56:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:35.625 06:56:40 -- common/autotest_common.sh@10 -- # set +x 00:30:35.625 ************************************ 00:30:35.625 START TEST nvmf_target_disconnect_tc2 00:30:35.625 ************************************ 00:30:35.625 06:56:40 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2 00:30:35.625 06:56:40 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:30:35.625 06:56:40 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:35.625 06:56:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:30:35.625 06:56:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:35.625 06:56:40 -- common/autotest_common.sh@10 -- # set +x 00:30:35.625 06:56:40 -- nvmf/common.sh@470 -- # nvmfpid=123213 00:30:35.625 06:56:40 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:35.625 06:56:40 -- nvmf/common.sh@471 -- # waitforlisten 123213 00:30:35.625 06:56:40 -- common/autotest_common.sh@817 -- # '[' -z 123213 ']' 00:30:35.625 06:56:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:35.625 06:56:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:35.625 06:56:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:35.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:35.625 06:56:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:35.625 06:56:40 -- common/autotest_common.sh@10 -- # set +x 00:30:35.625 [2024-04-17 06:56:40.175605] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:30:35.625 [2024-04-17 06:56:40.175676] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:35.625 EAL: No free 2048 kB hugepages reported on node 1 00:30:35.884 [2024-04-17 06:56:40.248881] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:35.884 [2024-04-17 06:56:40.344225] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:35.884 [2024-04-17 06:56:40.344295] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:35.884 [2024-04-17 06:56:40.344309] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:35.884 [2024-04-17 06:56:40.344321] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
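For context on the addresses used throughout these tests: the nvmftestinit/nvmf_tcp_init phase a little further up took the two ice-bound ports found at 0000:0a:00.0/.1 (cvl_0_0 and cvl_0_1) and turned them into a point-to-point NVMe/TCP link — cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and connectivity is checked with one ping in each direction. Condensed from the commands visible in the log (a sketch, not an exact transcript; interface names are the ones the harness created):

  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP through
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1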
00:30:35.884 [2024-04-17 06:56:40.344331] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:35.884 [2024-04-17 06:56:40.344424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:30:35.884 [2024-04-17 06:56:40.344463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:30:35.884 [2024-04-17 06:56:40.344558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:30:35.884 [2024-04-17 06:56:40.344561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:30:35.884 06:56:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:35.884 06:56:40 -- common/autotest_common.sh@850 -- # return 0 00:30:35.884 06:56:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:30:35.884 06:56:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:35.884 06:56:40 -- common/autotest_common.sh@10 -- # set +x 00:30:35.884 06:56:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:35.884 06:56:40 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:35.884 06:56:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:35.884 06:56:40 -- common/autotest_common.sh@10 -- # set +x 00:30:36.142 Malloc0 00:30:36.142 06:56:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:36.142 06:56:40 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:36.142 06:56:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:36.142 06:56:40 -- common/autotest_common.sh@10 -- # set +x 00:30:36.142 [2024-04-17 06:56:40.512756] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:36.142 06:56:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:36.142 06:56:40 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:36.142 06:56:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:36.142 06:56:40 -- common/autotest_common.sh@10 -- # set +x 00:30:36.142 06:56:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:36.142 06:56:40 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:36.142 06:56:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:36.142 06:56:40 -- common/autotest_common.sh@10 -- # set +x 00:30:36.143 06:56:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:36.143 06:56:40 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:36.143 06:56:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:36.143 06:56:40 -- common/autotest_common.sh@10 -- # set +x 00:30:36.143 [2024-04-17 06:56:40.541026] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:36.143 06:56:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:36.143 06:56:40 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:36.143 06:56:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:36.143 06:56:40 -- common/autotest_common.sh@10 -- # set +x 00:30:36.143 06:56:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:36.143 06:56:40 -- host/target_disconnect.sh@50 -- # reconnectpid=123303 00:30:36.143 06:56:40 -- host/target_disconnect.sh@52 -- # sleep 2 00:30:36.143 06:56:40 -- host/target_disconnect.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:36.143 EAL: No free 2048 kB hugepages reported on node 1 00:30:38.048 06:56:42 -- host/target_disconnect.sh@53 -- # kill -9 123213 00:30:38.048 06:56:42 -- host/target_disconnect.sh@55 -- # sleep 2 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Write completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Write completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Write completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Write completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Write completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Write completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Write completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Write completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 [2024-04-17 06:56:42.568302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 
starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Write completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Write completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Write completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Write completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Write completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Write completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Write completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Write completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Write completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 [2024-04-17 06:56:42.568633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Write completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 
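In the completion spam above, "sct=0, sc=8" is the status stamped on every outstanding command once the connection drops: status code type 0 (generic command status) with status code 0x08, which the NVMe spec names "Command Aborted due to SQ Deletion" — the submission queue went away, so queued I/O is completed as aborted rather than silently lost. If an SPDK tree is at hand, the constant can be located with a side check like the one below (assumption: the enum is spelled SPDK_NVME_SC_ABORTED_SQ_DELETION in the public spec header):

  # Side check only; run from the SPDK repository root.
  grep -n 'ABORTED_SQ_DELETION' include/spdk/nvme_spec.h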
00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Write completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Write completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Write completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Write completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Write completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.048 Read completed with error (sct=0, sc=8) 00:30:38.048 starting I/O failed 00:30:38.049 Write completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Read completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Write completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Read completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Read completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 [2024-04-17 06:56:42.568937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.049 Read completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Read completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Read completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Read completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Read completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Read completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Read completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Write completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Write completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Write completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Read completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Write completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Read completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Read completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Read completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Write completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Read completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Write 
completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Read completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Read completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Read completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Write completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Read completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Read completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Read completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Read completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Read completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Write completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Write completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Write completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Write completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 Write completed with error (sct=0, sc=8) 00:30:38.049 starting I/O failed 00:30:38.049 [2024-04-17 06:56:42.569258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.049 [2024-04-17 06:56:42.569428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.569605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.569633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.049 qpair failed and we were unable to recover it. 00:30:38.049 [2024-04-17 06:56:42.569773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.569918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.569943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.049 qpair failed and we were unable to recover it. 00:30:38.049 [2024-04-17 06:56:42.570109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.570272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.570299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.049 qpair failed and we were unable to recover it. 00:30:38.049 [2024-04-17 06:56:42.570424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.570660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.570686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.049 qpair failed and we were unable to recover it. 
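Editor's note (not part of the captured log): the repeated "(sct=0, sc=8)" completions and the "CQ transport error -6 (No such device or address)" messages above can be read against public conventions rather than anything private to this run. The field layout below is taken from the NVMe base specification and from the fact that SPDK-style negative return codes are negated errno values; the suggestion that SCT=0/SC=0x08 is the generic "Command Aborted due to SQ Deletion" status, consistent with outstanding I/O being failed while the queue pair is torn down, is an interpretation, not something the log states. A minimal, self-contained C sketch:

/* status_decode.c: decode the NVMe completion status field (CQE dword 3)
 * and render a negative errno-style return code such as -6.
 * Build: cc -o status_decode status_decode.c
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* NVMe CQE dword 3 layout (NVMe base spec): bit 16 = phase tag,
 * bits 24:17 = status code (SC), bits 27:25 = status code type (SCT),
 * bit 31 = do not retry (DNR). */
static void decode_cqe_dw3(uint32_t dw3)
{
    unsigned sc  = (dw3 >> 17) & 0xffu;
    unsigned sct = (dw3 >> 25) & 0x7u;
    unsigned dnr = (dw3 >> 31) & 0x1u;
    printf("sct=%u, sc=0x%02x, dnr=%u\n", sct, sc, dnr);
}

int main(void)
{
    /* Example dword matching the log's "sct=0, sc=8": SC=0x08, SCT=0. */
    uint32_t dw3 = (uint32_t)0x08u << 17;
    decode_cqe_dw3(dw3);

    /* Negative return codes of the form seen in the "CQ transport error -6"
     * message are negated errno values; -6 maps to ENXIO. */
    int rc = -6;
    printf("rc=%d -> %s\n", rc, strerror(-rc));
    return 0;
}

Running this prints "sct=0, sc=0x08, dnr=0" and "rc=-6 -> No such device or address", matching the strings that appear throughout the log.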
00:30:38.049 [2024-04-17 06:56:42.570816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.571057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.571085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.049 qpair failed and we were unable to recover it. 00:30:38.049 [2024-04-17 06:56:42.571233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.571354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.571379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.049 qpair failed and we were unable to recover it. 00:30:38.049 [2024-04-17 06:56:42.571538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.571670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.571700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.049 qpair failed and we were unable to recover it. 00:30:38.049 [2024-04-17 06:56:42.571834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.572098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.572126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.049 qpair failed and we were unable to recover it. 00:30:38.049 [2024-04-17 06:56:42.572274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.572411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.572436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.049 qpair failed and we were unable to recover it. 00:30:38.049 [2024-04-17 06:56:42.572558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.572741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.572767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.049 qpair failed and we were unable to recover it. 00:30:38.049 [2024-04-17 06:56:42.572900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.573078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.573105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.049 qpair failed and we were unable to recover it. 
00:30:38.049 [2024-04-17 06:56:42.573268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.573405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.573430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.049 qpair failed and we were unable to recover it. 00:30:38.049 [2024-04-17 06:56:42.573581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.573707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.573733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.049 qpair failed and we were unable to recover it. 00:30:38.049 [2024-04-17 06:56:42.573920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.574068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.574096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.049 qpair failed and we were unable to recover it. 00:30:38.049 [2024-04-17 06:56:42.574273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.574440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.574466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.049 qpair failed and we were unable to recover it. 00:30:38.049 [2024-04-17 06:56:42.574683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.574912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.574938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.049 qpair failed and we were unable to recover it. 00:30:38.049 [2024-04-17 06:56:42.575241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.575402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.575428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.049 qpair failed and we were unable to recover it. 00:30:38.049 [2024-04-17 06:56:42.575587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.575827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.575878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.049 qpair failed and we were unable to recover it. 
00:30:38.049 [2024-04-17 06:56:42.576064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.576195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.049 [2024-04-17 06:56:42.576222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.049 qpair failed and we were unable to recover it. 00:30:38.050 [2024-04-17 06:56:42.576349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.576582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.576631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.050 qpair failed and we were unable to recover it. 00:30:38.050 [2024-04-17 06:56:42.576900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.577104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.577129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.050 qpair failed and we were unable to recover it. 00:30:38.050 [2024-04-17 06:56:42.577286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.577411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.577444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.050 qpair failed and we were unable to recover it. 00:30:38.050 [2024-04-17 06:56:42.577733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.577944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.577970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.050 qpair failed and we were unable to recover it. 00:30:38.050 [2024-04-17 06:56:42.578122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.578305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.578331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.050 qpair failed and we were unable to recover it. 00:30:38.050 [2024-04-17 06:56:42.578463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.578657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.578682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.050 qpair failed and we were unable to recover it. 
00:30:38.050 [2024-04-17 06:56:42.578874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.579003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.579028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.050 qpair failed and we were unable to recover it. 00:30:38.050 [2024-04-17 06:56:42.579198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.579360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.579386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.050 qpair failed and we were unable to recover it. 00:30:38.050 [2024-04-17 06:56:42.579539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.579688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.579716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.050 qpair failed and we were unable to recover it. 00:30:38.050 [2024-04-17 06:56:42.579929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.580089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.580115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.050 qpair failed and we were unable to recover it. 00:30:38.050 [2024-04-17 06:56:42.580296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.580472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.580501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.050 qpair failed and we were unable to recover it. 00:30:38.050 [2024-04-17 06:56:42.580728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.581005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.581029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.050 qpair failed and we were unable to recover it. 00:30:38.050 [2024-04-17 06:56:42.581192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.581350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.581375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.050 qpair failed and we were unable to recover it. 
00:30:38.050 [2024-04-17 06:56:42.581561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.581778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.581829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.050 qpair failed and we were unable to recover it. 00:30:38.050 [2024-04-17 06:56:42.582018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.582146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.582171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.050 qpair failed and we were unable to recover it. 00:30:38.050 [2024-04-17 06:56:42.582341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.582503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.582529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.050 qpair failed and we were unable to recover it. 00:30:38.050 [2024-04-17 06:56:42.582659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.582909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.582949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.050 qpair failed and we were unable to recover it. 00:30:38.050 [2024-04-17 06:56:42.583154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.583327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.583353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.050 qpair failed and we were unable to recover it. 00:30:38.050 [2024-04-17 06:56:42.583501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.583660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.583687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.050 qpair failed and we were unable to recover it. 00:30:38.050 [2024-04-17 06:56:42.583824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.584000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.584028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.050 qpair failed and we were unable to recover it. 
00:30:38.050 [2024-04-17 06:56:42.584185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.584387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.584412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.050 qpair failed and we were unable to recover it. 00:30:38.050 [2024-04-17 06:56:42.584586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.584779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.584804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.050 qpair failed and we were unable to recover it. 00:30:38.050 [2024-04-17 06:56:42.585023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.585187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.585227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.050 qpair failed and we were unable to recover it. 00:30:38.050 [2024-04-17 06:56:42.585403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.585575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.585600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.050 qpair failed and we were unable to recover it. 00:30:38.050 [2024-04-17 06:56:42.585787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.585949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.585975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.050 qpair failed and we were unable to recover it. 00:30:38.050 [2024-04-17 06:56:42.586108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.586241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.586266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.050 qpair failed and we were unable to recover it. 00:30:38.050 [2024-04-17 06:56:42.586457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.586639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.586664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.050 qpair failed and we were unable to recover it. 
00:30:38.050 [2024-04-17 06:56:42.586849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.586970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.587000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.050 qpair failed and we were unable to recover it. 00:30:38.050 [2024-04-17 06:56:42.587202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.587333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.050 [2024-04-17 06:56:42.587359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.050 qpair failed and we were unable to recover it. 00:30:38.051 [2024-04-17 06:56:42.587620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.587815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.587840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.051 qpair failed and we were unable to recover it. 00:30:38.051 [2024-04-17 06:56:42.588025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.588191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.588220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.051 qpair failed and we were unable to recover it. 00:30:38.051 [2024-04-17 06:56:42.588417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.588571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.588599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.051 qpair failed and we were unable to recover it. 00:30:38.051 [2024-04-17 06:56:42.588774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.588954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.588982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.051 qpair failed and we were unable to recover it. 00:30:38.051 [2024-04-17 06:56:42.589159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.589332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.589357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.051 qpair failed and we were unable to recover it. 
00:30:38.051 [2024-04-17 06:56:42.589522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.589686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.589712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.051 qpair failed and we were unable to recover it. 00:30:38.051 [2024-04-17 06:56:42.589889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.590057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.590084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.051 qpair failed and we were unable to recover it. 00:30:38.051 [2024-04-17 06:56:42.590245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.590496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.590538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.051 qpair failed and we were unable to recover it. 00:30:38.051 [2024-04-17 06:56:42.590717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.590848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.590891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.051 qpair failed and we were unable to recover it. 00:30:38.051 [2024-04-17 06:56:42.591065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.591252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.591278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.051 qpair failed and we were unable to recover it. 00:30:38.051 [2024-04-17 06:56:42.591406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.591565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.591592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.051 qpair failed and we were unable to recover it. 00:30:38.051 [2024-04-17 06:56:42.591766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.591922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.591962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.051 qpair failed and we were unable to recover it. 
00:30:38.051 [2024-04-17 06:56:42.592161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.592318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.592343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.051 qpair failed and we were unable to recover it. 00:30:38.051 [2024-04-17 06:56:42.592504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.592622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.592648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.051 qpair failed and we were unable to recover it. 00:30:38.051 [2024-04-17 06:56:42.592827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.593039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.593066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.051 qpair failed and we were unable to recover it. 00:30:38.051 [2024-04-17 06:56:42.593189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.593347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.593373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.051 qpair failed and we were unable to recover it. 00:30:38.051 [2024-04-17 06:56:42.593527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.593677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.593703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.051 qpair failed and we were unable to recover it. 00:30:38.051 [2024-04-17 06:56:42.593890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.594076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.594102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.051 qpair failed and we were unable to recover it. 00:30:38.051 [2024-04-17 06:56:42.594282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.594484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.594510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.051 qpair failed and we were unable to recover it. 
00:30:38.051 [2024-04-17 06:56:42.594671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.594822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.594848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.051 qpair failed and we were unable to recover it. 00:30:38.051 [2024-04-17 06:56:42.595034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.595189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.595215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.051 qpair failed and we were unable to recover it. 00:30:38.051 [2024-04-17 06:56:42.595377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.595537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.595563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.051 qpair failed and we were unable to recover it. 00:30:38.051 [2024-04-17 06:56:42.595765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.595889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.595915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.051 qpair failed and we were unable to recover it. 00:30:38.051 [2024-04-17 06:56:42.596073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.596232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.596259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.051 qpair failed and we were unable to recover it. 00:30:38.051 [2024-04-17 06:56:42.596407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.596589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.596614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.051 qpair failed and we were unable to recover it. 00:30:38.051 [2024-04-17 06:56:42.596763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.596948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.596974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.051 qpair failed and we were unable to recover it. 
00:30:38.051 [2024-04-17 06:56:42.597192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.597333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.597362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.051 qpair failed and we were unable to recover it. 00:30:38.051 [2024-04-17 06:56:42.597548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.597680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.597705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.051 qpair failed and we were unable to recover it. 00:30:38.051 [2024-04-17 06:56:42.597888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.598045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.051 [2024-04-17 06:56:42.598072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.051 qpair failed and we were unable to recover it. 00:30:38.052 [2024-04-17 06:56:42.598225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.598381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.598407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.052 qpair failed and we were unable to recover it. 00:30:38.052 [2024-04-17 06:56:42.598558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.598718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.598744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.052 qpair failed and we were unable to recover it. 00:30:38.052 [2024-04-17 06:56:42.598927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.599125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.599154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.052 qpair failed and we were unable to recover it. 00:30:38.052 [2024-04-17 06:56:42.599348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.599484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.599510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.052 qpair failed and we were unable to recover it. 
00:30:38.052 [2024-04-17 06:56:42.599667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.599837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.599863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.052 qpair failed and we were unable to recover it. 00:30:38.052 [2024-04-17 06:56:42.600046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.600184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.600211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.052 qpair failed and we were unable to recover it. 00:30:38.052 [2024-04-17 06:56:42.600361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.600497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.600523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.052 qpair failed and we were unable to recover it. 00:30:38.052 [2024-04-17 06:56:42.600706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.600899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.600965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.052 qpair failed and we were unable to recover it. 00:30:38.052 [2024-04-17 06:56:42.601136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.601346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.601373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.052 qpair failed and we were unable to recover it. 00:30:38.052 [2024-04-17 06:56:42.601502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.601652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.601678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.052 qpair failed and we were unable to recover it. 00:30:38.052 [2024-04-17 06:56:42.601847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.602028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.602057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.052 qpair failed and we were unable to recover it. 
00:30:38.052 [2024-04-17 06:56:42.602212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.602367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.602408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.052 qpair failed and we were unable to recover it. 00:30:38.052 [2024-04-17 06:56:42.602590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.602778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.602803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.052 qpair failed and we were unable to recover it. 00:30:38.052 [2024-04-17 06:56:42.603008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.603213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.603240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.052 qpair failed and we were unable to recover it. 00:30:38.052 [2024-04-17 06:56:42.603422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.603606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.603632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.052 qpair failed and we were unable to recover it. 00:30:38.052 [2024-04-17 06:56:42.603770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.603931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.603957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.052 qpair failed and we were unable to recover it. 00:30:38.052 [2024-04-17 06:56:42.604139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.604297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.604324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.052 qpair failed and we were unable to recover it. 00:30:38.052 [2024-04-17 06:56:42.604487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.604667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.604692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.052 qpair failed and we were unable to recover it. 
00:30:38.052 [2024-04-17 06:56:42.604869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.605040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.605066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.052 qpair failed and we were unable to recover it. 00:30:38.052 [2024-04-17 06:56:42.605222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.605377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.605419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.052 qpair failed and we were unable to recover it. 00:30:38.052 [2024-04-17 06:56:42.605559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.605717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.605743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.052 qpair failed and we were unable to recover it. 00:30:38.052 [2024-04-17 06:56:42.605903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.606106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.606132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.052 qpair failed and we were unable to recover it. 00:30:38.052 [2024-04-17 06:56:42.606287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.606446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.052 [2024-04-17 06:56:42.606472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.052 qpair failed and we were unable to recover it. 00:30:38.052 [2024-04-17 06:56:42.606633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.606829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.606857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.053 qpair failed and we were unable to recover it. 00:30:38.053 [2024-04-17 06:56:42.607065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.607215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.607241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.053 qpair failed and we were unable to recover it. 
00:30:38.053 [2024-04-17 06:56:42.607431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.607635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.607664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.053 qpair failed and we were unable to recover it. 00:30:38.053 [2024-04-17 06:56:42.607813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.607937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.607965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.053 qpair failed and we were unable to recover it. 00:30:38.053 [2024-04-17 06:56:42.608186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.608339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.608384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.053 qpair failed and we were unable to recover it. 00:30:38.053 [2024-04-17 06:56:42.608561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.608743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.608769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.053 qpair failed and we were unable to recover it. 00:30:38.053 [2024-04-17 06:56:42.608938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.609088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.609114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.053 qpair failed and we were unable to recover it. 00:30:38.053 [2024-04-17 06:56:42.609258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.609450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.609485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.053 qpair failed and we were unable to recover it. 00:30:38.053 [2024-04-17 06:56:42.609670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.609869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.609898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.053 qpair failed and we were unable to recover it. 
00:30:38.053 [2024-04-17 06:56:42.610066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.610192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.610236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.053 qpair failed and we were unable to recover it. 00:30:38.053 [2024-04-17 06:56:42.610404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.610663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.610721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.053 qpair failed and we were unable to recover it. 00:30:38.053 [2024-04-17 06:56:42.610887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.611083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.611111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.053 qpair failed and we were unable to recover it. 00:30:38.053 [2024-04-17 06:56:42.611302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.611458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.611483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.053 qpair failed and we were unable to recover it. 00:30:38.053 [2024-04-17 06:56:42.611633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.611756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.611782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.053 qpair failed and we were unable to recover it. 00:30:38.053 [2024-04-17 06:56:42.611941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.612095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.612121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.053 qpair failed and we were unable to recover it. 00:30:38.053 [2024-04-17 06:56:42.612279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.612408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.612434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.053 qpair failed and we were unable to recover it. 
00:30:38.053 [2024-04-17 06:56:42.612644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.612818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.612844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.053 qpair failed and we were unable to recover it. 00:30:38.053 [2024-04-17 06:56:42.613000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.613203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.613231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.053 qpair failed and we were unable to recover it. 00:30:38.053 [2024-04-17 06:56:42.613390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.613603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.613654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.053 qpair failed and we were unable to recover it. 00:30:38.053 [2024-04-17 06:56:42.613789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.613924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.613952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.053 qpair failed and we were unable to recover it. 00:30:38.053 [2024-04-17 06:56:42.614162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.614306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.614348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.053 qpair failed and we were unable to recover it. 00:30:38.053 [2024-04-17 06:56:42.614529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.614696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.614722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.053 qpair failed and we were unable to recover it. 00:30:38.053 [2024-04-17 06:56:42.614899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.615074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.615105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.053 qpair failed and we were unable to recover it. 
00:30:38.053 [2024-04-17 06:56:42.615320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.615445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.615471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.053 qpair failed and we were unable to recover it. 00:30:38.053 [2024-04-17 06:56:42.615680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.615817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.615845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.053 qpair failed and we were unable to recover it. 00:30:38.053 [2024-04-17 06:56:42.616006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.616201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.616228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.053 qpair failed and we were unable to recover it. 00:30:38.053 [2024-04-17 06:56:42.616425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.616652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.616710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.053 qpair failed and we were unable to recover it. 00:30:38.053 [2024-04-17 06:56:42.616861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.617046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.617090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.053 qpair failed and we were unable to recover it. 00:30:38.053 [2024-04-17 06:56:42.617285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.617467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.053 [2024-04-17 06:56:42.617494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.053 qpair failed and we were unable to recover it. 00:30:38.053 [2024-04-17 06:56:42.617644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.617802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.617843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.054 qpair failed and we were unable to recover it. 
00:30:38.054 [2024-04-17 06:56:42.617999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.618189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.618232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.054 qpair failed and we were unable to recover it. 00:30:38.054 [2024-04-17 06:56:42.618404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.618563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.618588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.054 qpair failed and we were unable to recover it. 00:30:38.054 [2024-04-17 06:56:42.618743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.618955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.618983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.054 qpair failed and we were unable to recover it. 00:30:38.054 [2024-04-17 06:56:42.619172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.619328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.619354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.054 qpair failed and we were unable to recover it. 00:30:38.054 [2024-04-17 06:56:42.619537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.619688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.619713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.054 qpair failed and we were unable to recover it. 00:30:38.054 [2024-04-17 06:56:42.619872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.620009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.620039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.054 qpair failed and we were unable to recover it. 00:30:38.054 [2024-04-17 06:56:42.620229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.620389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.620431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.054 qpair failed and we were unable to recover it. 
00:30:38.054 [2024-04-17 06:56:42.620611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.620781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.620806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.054 qpair failed and we were unable to recover it. 00:30:38.054 [2024-04-17 06:56:42.620961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.621141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.621169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.054 qpair failed and we were unable to recover it. 00:30:38.054 [2024-04-17 06:56:42.621352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.621502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.621527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.054 qpair failed and we were unable to recover it. 00:30:38.054 [2024-04-17 06:56:42.621712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.621885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.621913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.054 qpair failed and we were unable to recover it. 00:30:38.054 [2024-04-17 06:56:42.622059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.622258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.622287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.054 qpair failed and we were unable to recover it. 00:30:38.054 [2024-04-17 06:56:42.622438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.622592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.622617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.054 qpair failed and we were unable to recover it. 00:30:38.054 [2024-04-17 06:56:42.622767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.622939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.622968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.054 qpair failed and we were unable to recover it. 
00:30:38.054 [2024-04-17 06:56:42.623136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.623337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.623394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.054 qpair failed and we were unable to recover it. 00:30:38.054 [2024-04-17 06:56:42.623581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.623709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.623735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.054 qpair failed and we were unable to recover it. 00:30:38.054 [2024-04-17 06:56:42.623883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.624029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.624054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.054 qpair failed and we were unable to recover it. 00:30:38.054 [2024-04-17 06:56:42.624181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.624306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.624331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.054 qpair failed and we were unable to recover it. 00:30:38.054 [2024-04-17 06:56:42.624487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.624653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.624682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.054 qpair failed and we were unable to recover it. 00:30:38.054 [2024-04-17 06:56:42.624856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.624972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.624998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.054 qpair failed and we were unable to recover it. 00:30:38.054 [2024-04-17 06:56:42.625152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.625307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.625333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.054 qpair failed and we were unable to recover it. 
00:30:38.054 [2024-04-17 06:56:42.625517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.625743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.625772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.054 qpair failed and we were unable to recover it. 00:30:38.054 [2024-04-17 06:56:42.625938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.626079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.626107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.054 qpair failed and we were unable to recover it. 00:30:38.054 [2024-04-17 06:56:42.626265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.626437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.626465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.054 qpair failed and we were unable to recover it. 00:30:38.054 [2024-04-17 06:56:42.626641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.626795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.626838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.054 qpair failed and we were unable to recover it. 00:30:38.054 [2024-04-17 06:56:42.626977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.627185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.627211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.054 qpair failed and we were unable to recover it. 00:30:38.054 [2024-04-17 06:56:42.627366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.627546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.627572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.054 qpair failed and we were unable to recover it. 00:30:38.054 [2024-04-17 06:56:42.627699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.627851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.054 [2024-04-17 06:56:42.627877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.054 qpair failed and we were unable to recover it. 
00:30:38.054 [2024-04-17 06:56:42.628030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.628226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.628260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.055 qpair failed and we were unable to recover it. 00:30:38.055 [2024-04-17 06:56:42.628394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.628532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.628560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.055 qpair failed and we were unable to recover it. 00:30:38.055 [2024-04-17 06:56:42.628730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.628910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.628954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.055 qpair failed and we were unable to recover it. 00:30:38.055 [2024-04-17 06:56:42.629114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.629246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.629295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.055 qpair failed and we were unable to recover it. 00:30:38.055 [2024-04-17 06:56:42.629505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.629662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.629687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.055 qpair failed and we were unable to recover it. 00:30:38.055 [2024-04-17 06:56:42.629845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.629972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.629997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.055 qpair failed and we were unable to recover it. 00:30:38.055 [2024-04-17 06:56:42.630151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.630292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.630321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.055 qpair failed and we were unable to recover it. 
00:30:38.055 [2024-04-17 06:56:42.630495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.630675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.630700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.055 qpair failed and we were unable to recover it. 00:30:38.055 [2024-04-17 06:56:42.630849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.631000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.631042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.055 qpair failed and we were unable to recover it. 00:30:38.055 [2024-04-17 06:56:42.631221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.631405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.631431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.055 qpair failed and we were unable to recover it. 00:30:38.055 [2024-04-17 06:56:42.631621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.631781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.631822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.055 qpair failed and we were unable to recover it. 00:30:38.055 [2024-04-17 06:56:42.632010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.632197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.632240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.055 qpair failed and we were unable to recover it. 00:30:38.055 [2024-04-17 06:56:42.632375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.632534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.632563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.055 qpair failed and we were unable to recover it. 00:30:38.055 [2024-04-17 06:56:42.632772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.632932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.632972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.055 qpair failed and we were unable to recover it. 
00:30:38.055 [2024-04-17 06:56:42.633183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.633343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.633386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.055 qpair failed and we were unable to recover it. 00:30:38.055 [2024-04-17 06:56:42.633576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.633756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.633781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.055 qpair failed and we were unable to recover it. 00:30:38.055 [2024-04-17 06:56:42.633957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.634183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.634210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.055 qpair failed and we were unable to recover it. 00:30:38.055 [2024-04-17 06:56:42.634336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.634506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.634532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.055 qpair failed and we were unable to recover it. 00:30:38.055 [2024-04-17 06:56:42.634714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.634907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.634936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.055 qpair failed and we were unable to recover it. 00:30:38.055 [2024-04-17 06:56:42.635111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.635305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.635332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.055 qpair failed and we were unable to recover it. 00:30:38.055 [2024-04-17 06:56:42.635467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.635591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.635617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.055 qpair failed and we were unable to recover it. 
00:30:38.055 [2024-04-17 06:56:42.635742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.635862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.635889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.055 qpair failed and we were unable to recover it. 00:30:38.055 [2024-04-17 06:56:42.636030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.636189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.636215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.055 qpair failed and we were unable to recover it. 00:30:38.055 [2024-04-17 06:56:42.636366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.636526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.636567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.055 qpair failed and we were unable to recover it. 00:30:38.055 [2024-04-17 06:56:42.636722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.636846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.636872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.055 qpair failed and we were unable to recover it. 00:30:38.055 [2024-04-17 06:56:42.637010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.637274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.637303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.055 qpair failed and we were unable to recover it. 00:30:38.055 [2024-04-17 06:56:42.637437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.637573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.637598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.055 qpair failed and we were unable to recover it. 00:30:38.055 [2024-04-17 06:56:42.637759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.637917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.637945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.055 qpair failed and we were unable to recover it. 
00:30:38.055 [2024-04-17 06:56:42.638110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.055 [2024-04-17 06:56:42.638261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.638288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.056 qpair failed and we were unable to recover it. 00:30:38.056 [2024-04-17 06:56:42.638469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.638643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.638684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.056 qpair failed and we were unable to recover it. 00:30:38.056 [2024-04-17 06:56:42.638855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.639040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.639067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.056 qpair failed and we were unable to recover it. 00:30:38.056 [2024-04-17 06:56:42.639204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.639361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.639387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.056 qpair failed and we were unable to recover it. 00:30:38.056 [2024-04-17 06:56:42.639570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.639717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.639742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.056 qpair failed and we were unable to recover it. 00:30:38.056 [2024-04-17 06:56:42.639870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.640027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.640055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.056 qpair failed and we were unable to recover it. 00:30:38.056 [2024-04-17 06:56:42.640248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.640413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.640439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.056 qpair failed and we were unable to recover it. 
00:30:38.056 [2024-04-17 06:56:42.640597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.640783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.640810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.056 qpair failed and we were unable to recover it. 00:30:38.056 [2024-04-17 06:56:42.641030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.641191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.641219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.056 qpair failed and we were unable to recover it. 00:30:38.056 [2024-04-17 06:56:42.641384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.641593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.641618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.056 qpair failed and we were unable to recover it. 00:30:38.056 [2024-04-17 06:56:42.641787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.641942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.641968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.056 qpair failed and we were unable to recover it. 00:30:38.056 [2024-04-17 06:56:42.642189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.642375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.642401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.056 qpair failed and we were unable to recover it. 00:30:38.056 [2024-04-17 06:56:42.642682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.642846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.642875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.056 qpair failed and we were unable to recover it. 00:30:38.056 [2024-04-17 06:56:42.643065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.643269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.643296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.056 qpair failed and we were unable to recover it. 
00:30:38.056 [2024-04-17 06:56:42.643424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.643584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.643610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.056 qpair failed and we were unable to recover it. 00:30:38.056 [2024-04-17 06:56:42.643757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.643938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.643964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.056 qpair failed and we were unable to recover it. 00:30:38.056 [2024-04-17 06:56:42.644121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.644299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.644325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.056 qpair failed and we were unable to recover it. 00:30:38.056 [2024-04-17 06:56:42.644514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.644713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.644739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.056 qpair failed and we were unable to recover it. 00:30:38.056 [2024-04-17 06:56:42.644861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.645019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.645044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.056 qpair failed and we were unable to recover it. 00:30:38.056 [2024-04-17 06:56:42.645200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.645325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.645352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.056 qpair failed and we were unable to recover it. 00:30:38.056 [2024-04-17 06:56:42.645501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.645679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.645705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.056 qpair failed and we were unable to recover it. 
00:30:38.056 [2024-04-17 06:56:42.645833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.646018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.646049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.056 qpair failed and we were unable to recover it. 00:30:38.056 [2024-04-17 06:56:42.646202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.646365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.646392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.056 qpair failed and we were unable to recover it. 00:30:38.056 [2024-04-17 06:56:42.646573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.646736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.646766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.056 qpair failed and we were unable to recover it. 00:30:38.056 [2024-04-17 06:56:42.646913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.647101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.647129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.056 qpair failed and we were unable to recover it. 00:30:38.056 [2024-04-17 06:56:42.647343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.647528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.647554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.056 qpair failed and we were unable to recover it. 00:30:38.056 [2024-04-17 06:56:42.647704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.647859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.647884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.056 qpair failed and we were unable to recover it. 00:30:38.056 [2024-04-17 06:56:42.648070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.648251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.648290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.056 qpair failed and we were unable to recover it. 
00:30:38.056 [2024-04-17 06:56:42.648480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.648645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.648672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.056 qpair failed and we were unable to recover it. 00:30:38.056 [2024-04-17 06:56:42.648812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.056 [2024-04-17 06:56:42.648969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.057 [2024-04-17 06:56:42.648998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.057 qpair failed and we were unable to recover it. 00:30:38.057 [2024-04-17 06:56:42.649156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.057 [2024-04-17 06:56:42.649304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.057 [2024-04-17 06:56:42.649333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.057 qpair failed and we were unable to recover it. 00:30:38.057 [2024-04-17 06:56:42.649491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.057 [2024-04-17 06:56:42.649614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.057 [2024-04-17 06:56:42.649640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.057 qpair failed and we were unable to recover it. 00:30:38.057 [2024-04-17 06:56:42.649793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.057 [2024-04-17 06:56:42.649994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.057 [2024-04-17 06:56:42.650035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.057 qpair failed and we were unable to recover it. 00:30:38.057 [2024-04-17 06:56:42.650245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.057 [2024-04-17 06:56:42.650390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.057 [2024-04-17 06:56:42.650417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.332 qpair failed and we were unable to recover it. 00:30:38.332 [2024-04-17 06:56:42.650581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.332 [2024-04-17 06:56:42.650743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.332 [2024-04-17 06:56:42.650778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.332 qpair failed and we were unable to recover it. 
00:30:38.332 [2024-04-17 06:56:42.650939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.332 [2024-04-17 06:56:42.651149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.332 [2024-04-17 06:56:42.651195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.332 qpair failed and we were unable to recover it. 00:30:38.332 [2024-04-17 06:56:42.651382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.332 [2024-04-17 06:56:42.651567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.332 [2024-04-17 06:56:42.651593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.332 qpair failed and we were unable to recover it. 00:30:38.332 [2024-04-17 06:56:42.651773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.332 [2024-04-17 06:56:42.651963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.332 [2024-04-17 06:56:42.651990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.332 qpair failed and we were unable to recover it. 00:30:38.332 [2024-04-17 06:56:42.652184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.332 [2024-04-17 06:56:42.652322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.332 [2024-04-17 06:56:42.652354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.332 qpair failed and we were unable to recover it. 00:30:38.332 [2024-04-17 06:56:42.652543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.332 [2024-04-17 06:56:42.652727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.332 [2024-04-17 06:56:42.652753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.333 qpair failed and we were unable to recover it. 00:30:38.333 [2024-04-17 06:56:42.652917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.653041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.653084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.333 qpair failed and we were unable to recover it. 00:30:38.333 [2024-04-17 06:56:42.653287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.653552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.653604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.333 qpair failed and we were unable to recover it. 
00:30:38.333 [2024-04-17 06:56:42.653812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.653968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.654001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.333 qpair failed and we were unable to recover it. 00:30:38.333 [2024-04-17 06:56:42.654172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.654388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.654417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.333 qpair failed and we were unable to recover it. 00:30:38.333 [2024-04-17 06:56:42.654559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.654733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.654762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.333 qpair failed and we were unable to recover it. 00:30:38.333 [2024-04-17 06:56:42.654934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.655133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.655161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.333 qpair failed and we were unable to recover it. 00:30:38.333 [2024-04-17 06:56:42.655334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.655461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.655487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.333 qpair failed and we were unable to recover it. 00:30:38.333 [2024-04-17 06:56:42.655703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.655876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.655901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.333 qpair failed and we were unable to recover it. 00:30:38.333 [2024-04-17 06:56:42.656024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.656145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.656171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.333 qpair failed and we were unable to recover it. 
00:30:38.333 [2024-04-17 06:56:42.656306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.656506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.656535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.333 qpair failed and we were unable to recover it. 00:30:38.333 [2024-04-17 06:56:42.656733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.656881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.656910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.333 qpair failed and we were unable to recover it. 00:30:38.333 [2024-04-17 06:56:42.657079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.657280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.657310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.333 qpair failed and we were unable to recover it. 00:30:38.333 [2024-04-17 06:56:42.657512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.657694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.657720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.333 qpair failed and we were unable to recover it. 00:30:38.333 [2024-04-17 06:56:42.657930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.658096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.658124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.333 qpair failed and we were unable to recover it. 00:30:38.333 [2024-04-17 06:56:42.658323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.658494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.658523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.333 qpair failed and we were unable to recover it. 00:30:38.333 [2024-04-17 06:56:42.658694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.658818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.658843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.333 qpair failed and we were unable to recover it. 
00:30:38.333 [2024-04-17 06:56:42.659027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.659155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.659187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.333 qpair failed and we were unable to recover it. 00:30:38.333 [2024-04-17 06:56:42.659308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.659454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.659496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.333 qpair failed and we were unable to recover it. 00:30:38.333 [2024-04-17 06:56:42.659672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.659831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.659857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.333 qpair failed and we were unable to recover it. 00:30:38.333 [2024-04-17 06:56:42.660039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.660204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.660233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.333 qpair failed and we were unable to recover it. 00:30:38.333 [2024-04-17 06:56:42.660435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.660598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.660626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.333 qpair failed and we were unable to recover it. 00:30:38.333 [2024-04-17 06:56:42.660830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.661003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.661032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.333 qpair failed and we were unable to recover it. 00:30:38.333 [2024-04-17 06:56:42.661209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.661347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.661375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.333 qpair failed and we were unable to recover it. 
00:30:38.333 [2024-04-17 06:56:42.661550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.661710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.661738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.333 qpair failed and we were unable to recover it. 00:30:38.333 [2024-04-17 06:56:42.661868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.662045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.662087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.333 qpair failed and we were unable to recover it. 00:30:38.333 [2024-04-17 06:56:42.662227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.662422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.662447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.333 qpair failed and we were unable to recover it. 00:30:38.333 [2024-04-17 06:56:42.662603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.333 [2024-04-17 06:56:42.662754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.662779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.334 qpair failed and we were unable to recover it. 00:30:38.334 [2024-04-17 06:56:42.662901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.663082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.663107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.334 qpair failed and we were unable to recover it. 00:30:38.334 [2024-04-17 06:56:42.663274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.663431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.663473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.334 qpair failed and we were unable to recover it. 00:30:38.334 [2024-04-17 06:56:42.663653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.663777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.663803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.334 qpair failed and we were unable to recover it. 
00:30:38.334 [2024-04-17 06:56:42.663960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.664131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.664160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.334 qpair failed and we were unable to recover it. 00:30:38.334 [2024-04-17 06:56:42.664368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.664539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.664569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.334 qpair failed and we were unable to recover it. 00:30:38.334 [2024-04-17 06:56:42.664771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.664969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.664998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.334 qpair failed and we were unable to recover it. 00:30:38.334 [2024-04-17 06:56:42.665148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.665328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.665356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.334 qpair failed and we were unable to recover it. 00:30:38.334 [2024-04-17 06:56:42.665514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.665713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.665765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.334 qpair failed and we were unable to recover it. 00:30:38.334 [2024-04-17 06:56:42.665953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.666154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.666191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.334 qpair failed and we were unable to recover it. 00:30:38.334 [2024-04-17 06:56:42.666381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.666626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.666691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.334 qpair failed and we were unable to recover it. 
00:30:38.334 [2024-04-17 06:56:42.666908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.667095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.667124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.334 qpair failed and we were unable to recover it. 00:30:38.334 [2024-04-17 06:56:42.667283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.667427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.667454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.334 qpair failed and we were unable to recover it. 00:30:38.334 [2024-04-17 06:56:42.667611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.667737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.667764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.334 qpair failed and we were unable to recover it. 00:30:38.334 [2024-04-17 06:56:42.667929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.668102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.668131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.334 qpair failed and we were unable to recover it. 00:30:38.334 [2024-04-17 06:56:42.668303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.668465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.668507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.334 qpair failed and we were unable to recover it. 00:30:38.334 [2024-04-17 06:56:42.668685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.668921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.668946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.334 qpair failed and we were unable to recover it. 00:30:38.334 [2024-04-17 06:56:42.669143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.669307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.669335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.334 qpair failed and we were unable to recover it. 
00:30:38.334 [2024-04-17 06:56:42.669462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.669602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.669635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.334 qpair failed and we were unable to recover it. 00:30:38.334 [2024-04-17 06:56:42.669848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.669983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.670012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.334 qpair failed and we were unable to recover it. 00:30:38.334 [2024-04-17 06:56:42.670217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.670377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.670404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.334 qpair failed and we were unable to recover it. 00:30:38.334 [2024-04-17 06:56:42.670579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.670712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.670739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.334 qpair failed and we were unable to recover it. 00:30:38.334 [2024-04-17 06:56:42.670928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.671061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.671088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.334 qpair failed and we were unable to recover it. 00:30:38.334 [2024-04-17 06:56:42.671315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.671502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.671528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.334 qpair failed and we were unable to recover it. 00:30:38.334 [2024-04-17 06:56:42.671714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.671889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.671917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.334 qpair failed and we were unable to recover it. 
00:30:38.334 [2024-04-17 06:56:42.672097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.672308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.672339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.334 qpair failed and we were unable to recover it. 00:30:38.334 [2024-04-17 06:56:42.672516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.672716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.334 [2024-04-17 06:56:42.672745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.334 qpair failed and we were unable to recover it. 00:30:38.334 [2024-04-17 06:56:42.672945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.673107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.673136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.335 qpair failed and we were unable to recover it. 00:30:38.335 [2024-04-17 06:56:42.673362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.673488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.673514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.335 qpair failed and we were unable to recover it. 00:30:38.335 [2024-04-17 06:56:42.673642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.673818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.673847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.335 qpair failed and we were unable to recover it. 00:30:38.335 [2024-04-17 06:56:42.674057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.674207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.674234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.335 qpair failed and we were unable to recover it. 00:30:38.335 [2024-04-17 06:56:42.674397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.674556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.674582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.335 qpair failed and we were unable to recover it. 
00:30:38.335 [2024-04-17 06:56:42.674763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.674939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.674964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.335 qpair failed and we were unable to recover it. 00:30:38.335 [2024-04-17 06:56:42.675099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.675248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.675275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.335 qpair failed and we were unable to recover it. 00:30:38.335 [2024-04-17 06:56:42.675457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.675675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.675703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.335 qpair failed and we were unable to recover it. 00:30:38.335 [2024-04-17 06:56:42.675884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.676065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.676091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.335 qpair failed and we were unable to recover it. 00:30:38.335 [2024-04-17 06:56:42.676306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.676531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.676593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.335 qpair failed and we were unable to recover it. 00:30:38.335 [2024-04-17 06:56:42.676799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.676961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.676987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.335 qpair failed and we were unable to recover it. 00:30:38.335 [2024-04-17 06:56:42.677110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.677271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.677298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.335 qpair failed and we were unable to recover it. 
00:30:38.335 [2024-04-17 06:56:42.677495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.677641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.677667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.335 qpair failed and we were unable to recover it. 00:30:38.335 [2024-04-17 06:56:42.677798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.677951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.677977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.335 qpair failed and we were unable to recover it. 00:30:38.335 [2024-04-17 06:56:42.678192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.678334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.678363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.335 qpair failed and we were unable to recover it. 00:30:38.335 [2024-04-17 06:56:42.678538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.678671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.678700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.335 qpair failed and we were unable to recover it. 00:30:38.335 [2024-04-17 06:56:42.678898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.679022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.679065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.335 qpair failed and we were unable to recover it. 00:30:38.335 [2024-04-17 06:56:42.679275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.679406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.679432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.335 qpair failed and we were unable to recover it. 00:30:38.335 [2024-04-17 06:56:42.679593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.679762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.679790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.335 qpair failed and we were unable to recover it. 
00:30:38.335 [2024-04-17 06:56:42.679942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.680065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.680091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.335 qpair failed and we were unable to recover it. 00:30:38.335 [2024-04-17 06:56:42.680273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.680406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.680435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.335 qpair failed and we were unable to recover it. 00:30:38.335 [2024-04-17 06:56:42.680601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.680736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.680764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.335 qpair failed and we were unable to recover it. 00:30:38.335 [2024-04-17 06:56:42.680910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.681107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.681133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.335 qpair failed and we were unable to recover it. 00:30:38.335 [2024-04-17 06:56:42.681276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.681432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.681477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.335 qpair failed and we were unable to recover it. 00:30:38.335 [2024-04-17 06:56:42.681618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.681830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.681856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.335 qpair failed and we were unable to recover it. 00:30:38.335 [2024-04-17 06:56:42.682004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.682161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.682202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.335 qpair failed and we were unable to recover it. 
00:30:38.335 [2024-04-17 06:56:42.682379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.682526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.682553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.335 qpair failed and we were unable to recover it. 00:30:38.335 [2024-04-17 06:56:42.682681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.682839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.682865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.335 qpair failed and we were unable to recover it. 00:30:38.335 [2024-04-17 06:56:42.683063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.683249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.335 [2024-04-17 06:56:42.683291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.335 qpair failed and we were unable to recover it. 00:30:38.335 [2024-04-17 06:56:42.683476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.683631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.683657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.336 qpair failed and we were unable to recover it. 00:30:38.336 [2024-04-17 06:56:42.683880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.684029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.684055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.336 qpair failed and we were unable to recover it. 00:30:38.336 [2024-04-17 06:56:42.684240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.684359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.684386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.336 qpair failed and we were unable to recover it. 00:30:38.336 [2024-04-17 06:56:42.684538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.684751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.684781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.336 qpair failed and we were unable to recover it. 
00:30:38.336 [2024-04-17 06:56:42.684927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.685131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.685157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.336 qpair failed and we were unable to recover it. 00:30:38.336 [2024-04-17 06:56:42.685325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.685507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.685533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.336 qpair failed and we were unable to recover it. 00:30:38.336 [2024-04-17 06:56:42.685659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.685815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.685841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.336 qpair failed and we were unable to recover it. 00:30:38.336 [2024-04-17 06:56:42.686061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.686187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.686214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.336 qpair failed and we were unable to recover it. 00:30:38.336 [2024-04-17 06:56:42.686373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.686494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.686535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.336 qpair failed and we were unable to recover it. 00:30:38.336 [2024-04-17 06:56:42.686710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.686881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.686910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.336 qpair failed and we were unable to recover it. 00:30:38.336 [2024-04-17 06:56:42.687119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.687235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.687262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.336 qpair failed and we were unable to recover it. 
00:30:38.336 [2024-04-17 06:56:42.687410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.687533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.687559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.336 qpair failed and we were unable to recover it. 00:30:38.336 [2024-04-17 06:56:42.687713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.687893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.687919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.336 qpair failed and we were unable to recover it. 00:30:38.336 [2024-04-17 06:56:42.688144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.688310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.688341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.336 qpair failed and we were unable to recover it. 00:30:38.336 [2024-04-17 06:56:42.688498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.688687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.688729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.336 qpair failed and we were unable to recover it. 00:30:38.336 [2024-04-17 06:56:42.688940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.689105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.689132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.336 qpair failed and we were unable to recover it. 00:30:38.336 [2024-04-17 06:56:42.689334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.689486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.689514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.336 qpair failed and we were unable to recover it. 00:30:38.336 [2024-04-17 06:56:42.689691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.689877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.689903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.336 qpair failed and we were unable to recover it. 
00:30:38.336 [2024-04-17 06:56:42.690079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.690267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.690294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.336 qpair failed and we were unable to recover it. 00:30:38.336 [2024-04-17 06:56:42.690441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.690564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.690589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.336 qpair failed and we were unable to recover it. 00:30:38.336 [2024-04-17 06:56:42.690746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.690914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.690941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.336 qpair failed and we were unable to recover it. 00:30:38.336 [2024-04-17 06:56:42.691079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.691266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.691293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.336 qpair failed and we were unable to recover it. 00:30:38.336 [2024-04-17 06:56:42.691454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.691630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.691659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.336 qpair failed and we were unable to recover it. 00:30:38.336 [2024-04-17 06:56:42.691849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.692024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.692053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.336 qpair failed and we were unable to recover it. 00:30:38.336 [2024-04-17 06:56:42.692228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.692380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.692405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.336 qpair failed and we were unable to recover it. 
00:30:38.336 [2024-04-17 06:56:42.692573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.692718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.692743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.336 qpair failed and we were unable to recover it. 00:30:38.336 [2024-04-17 06:56:42.692889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.693013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.336 [2024-04-17 06:56:42.693038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.336 qpair failed and we were unable to recover it. 00:30:38.337 [2024-04-17 06:56:42.693190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.693369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.693397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.337 qpair failed and we were unable to recover it. 00:30:38.337 [2024-04-17 06:56:42.693595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.693851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.693909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.337 qpair failed and we were unable to recover it. 00:30:38.337 [2024-04-17 06:56:42.694083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.694242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.694284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.337 qpair failed and we were unable to recover it. 00:30:38.337 [2024-04-17 06:56:42.694436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.694631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.694660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.337 qpair failed and we were unable to recover it. 00:30:38.337 [2024-04-17 06:56:42.694865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.695070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.695098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.337 qpair failed and we were unable to recover it. 
00:30:38.337 [2024-04-17 06:56:42.695272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.695393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.695419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.337 qpair failed and we were unable to recover it. 00:30:38.337 [2024-04-17 06:56:42.695608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.695755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.695780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.337 qpair failed and we were unable to recover it. 00:30:38.337 [2024-04-17 06:56:42.695968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.696126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.696154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.337 qpair failed and we were unable to recover it. 00:30:38.337 [2024-04-17 06:56:42.696354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.696530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.696559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.337 qpair failed and we were unable to recover it. 00:30:38.337 [2024-04-17 06:56:42.696724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.696858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.696888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.337 qpair failed and we were unable to recover it. 00:30:38.337 [2024-04-17 06:56:42.697096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.697248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.697275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.337 qpair failed and we were unable to recover it. 00:30:38.337 [2024-04-17 06:56:42.697428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.697599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.697664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.337 qpair failed and we were unable to recover it. 
00:30:38.337 [2024-04-17 06:56:42.697826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.697997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.698026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.337 qpair failed and we were unable to recover it. 00:30:38.337 [2024-04-17 06:56:42.698160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.698345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.698374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.337 qpair failed and we were unable to recover it. 00:30:38.337 [2024-04-17 06:56:42.698554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.698702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.698743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.337 qpair failed and we were unable to recover it. 00:30:38.337 [2024-04-17 06:56:42.698917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.699087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.699116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.337 qpair failed and we were unable to recover it. 00:30:38.337 [2024-04-17 06:56:42.699289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.699421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.699449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.337 qpair failed and we were unable to recover it. 00:30:38.337 [2024-04-17 06:56:42.699629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.699779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.699804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.337 qpair failed and we were unable to recover it. 00:30:38.337 [2024-04-17 06:56:42.699958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.700095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.700124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.337 qpair failed and we were unable to recover it. 
00:30:38.337 [2024-04-17 06:56:42.700283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.700440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.337 [2024-04-17 06:56:42.700465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.338 qpair failed and we were unable to recover it. 00:30:38.338 [2024-04-17 06:56:42.700654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.338 [2024-04-17 06:56:42.700814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.338 [2024-04-17 06:56:42.700841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.338 qpair failed and we were unable to recover it. 00:30:38.338 [2024-04-17 06:56:42.701029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.338 [2024-04-17 06:56:42.701193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.338 [2024-04-17 06:56:42.701223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.338 qpair failed and we were unable to recover it. 00:30:38.338 [2024-04-17 06:56:42.701392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.338 [2024-04-17 06:56:42.701564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.338 [2024-04-17 06:56:42.701589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.338 qpair failed and we were unable to recover it. 00:30:38.338 [2024-04-17 06:56:42.701748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.338 [2024-04-17 06:56:42.701950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.338 [2024-04-17 06:56:42.701978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.338 qpair failed and we were unable to recover it. 00:30:38.338 [2024-04-17 06:56:42.702191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.338 [2024-04-17 06:56:42.702406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.338 [2024-04-17 06:56:42.702432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.338 qpair failed and we were unable to recover it. 00:30:38.338 [2024-04-17 06:56:42.702602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.338 [2024-04-17 06:56:42.702781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.338 [2024-04-17 06:56:42.702807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.338 qpair failed and we were unable to recover it. 
00:30:38.338 [2024-04-17 06:56:42.702961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.338 [2024-04-17 06:56:42.703084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.338 [2024-04-17 06:56:42.703110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.338 qpair failed and we were unable to recover it. 00:30:38.338 [2024-04-17 06:56:42.703267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.338 [2024-04-17 06:56:42.703425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.338 [2024-04-17 06:56:42.703450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.338 qpair failed and we were unable to recover it. 00:30:38.338 [2024-04-17 06:56:42.703618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.338 [2024-04-17 06:56:42.703828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.338 [2024-04-17 06:56:42.703878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.338 qpair failed and we were unable to recover it. 00:30:38.338 [2024-04-17 06:56:42.704061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.338 [2024-04-17 06:56:42.704222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.338 [2024-04-17 06:56:42.704264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.338 qpair failed and we were unable to recover it. 00:30:38.338 [2024-04-17 06:56:42.704394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.338 [2024-04-17 06:56:42.704569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.338 [2024-04-17 06:56:42.704597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.338 qpair failed and we were unable to recover it. 00:30:38.338 [2024-04-17 06:56:42.704793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.338 [2024-04-17 06:56:42.704974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.338 [2024-04-17 06:56:42.704999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.338 qpair failed and we were unable to recover it. 00:30:38.338 [2024-04-17 06:56:42.705157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.338 [2024-04-17 06:56:42.705369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.338 [2024-04-17 06:56:42.705395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.338 qpair failed and we were unable to recover it. 
00:30:38.343 [2024-04-17 06:56:42.759647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.343 [2024-04-17 06:56:42.759771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.343 [2024-04-17 06:56:42.759797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.343 qpair failed and we were unable to recover it. 00:30:38.343 [2024-04-17 06:56:42.759959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.343 [2024-04-17 06:56:42.760115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.343 [2024-04-17 06:56:42.760140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.343 qpair failed and we were unable to recover it. 00:30:38.343 [2024-04-17 06:56:42.760312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.343 [2024-04-17 06:56:42.760576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.343 [2024-04-17 06:56:42.760630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.343 qpair failed and we were unable to recover it. 00:30:38.343 [2024-04-17 06:56:42.760909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.343 [2024-04-17 06:56:42.761123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.343 [2024-04-17 06:56:42.761148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.343 qpair failed and we were unable to recover it. 00:30:38.343 [2024-04-17 06:56:42.761294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.343 [2024-04-17 06:56:42.761430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.343 [2024-04-17 06:56:42.761455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.343 qpair failed and we were unable to recover it. 00:30:38.343 [2024-04-17 06:56:42.761627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.343 [2024-04-17 06:56:42.761896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.761948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.344 qpair failed and we were unable to recover it. 00:30:38.344 [2024-04-17 06:56:42.762120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.762280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.762308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.344 qpair failed and we were unable to recover it. 
00:30:38.344 [2024-04-17 06:56:42.762521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.762666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.762691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.344 qpair failed and we were unable to recover it. 00:30:38.344 [2024-04-17 06:56:42.762836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.763002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.763029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.344 qpair failed and we were unable to recover it. 00:30:38.344 [2024-04-17 06:56:42.763205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.763335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.763377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.344 qpair failed and we were unable to recover it. 00:30:38.344 [2024-04-17 06:56:42.763635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.763944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.763995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.344 qpair failed and we were unable to recover it. 00:30:38.344 [2024-04-17 06:56:42.764141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.764331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.764359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.344 qpair failed and we were unable to recover it. 00:30:38.344 [2024-04-17 06:56:42.764494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.764673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.764700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.344 qpair failed and we were unable to recover it. 00:30:38.344 [2024-04-17 06:56:42.764876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.765005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.765045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.344 qpair failed and we were unable to recover it. 
00:30:38.344 [2024-04-17 06:56:42.765285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.765429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.765457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.344 qpair failed and we were unable to recover it. 00:30:38.344 [2024-04-17 06:56:42.765630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.765822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.765885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.344 qpair failed and we were unable to recover it. 00:30:38.344 [2024-04-17 06:56:42.766103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.766245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.766272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.344 qpair failed and we were unable to recover it. 00:30:38.344 [2024-04-17 06:56:42.766455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.766617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.766642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.344 qpair failed and we were unable to recover it. 00:30:38.344 [2024-04-17 06:56:42.766843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.767076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.767103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.344 qpair failed and we were unable to recover it. 00:30:38.344 [2024-04-17 06:56:42.767271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.767424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.767449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.344 qpair failed and we were unable to recover it. 00:30:38.344 [2024-04-17 06:56:42.767649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.767807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.767836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.344 qpair failed and we were unable to recover it. 
00:30:38.344 [2024-04-17 06:56:42.767961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.768092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.768117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.344 qpair failed and we were unable to recover it. 00:30:38.344 [2024-04-17 06:56:42.768316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.768456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.768493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.344 qpair failed and we were unable to recover it. 00:30:38.344 [2024-04-17 06:56:42.768619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.768802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.768827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.344 qpair failed and we were unable to recover it. 00:30:38.344 [2024-04-17 06:56:42.769004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.769185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.769213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.344 qpair failed and we were unable to recover it. 00:30:38.344 [2024-04-17 06:56:42.769363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.769519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.769544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.344 qpair failed and we were unable to recover it. 00:30:38.344 [2024-04-17 06:56:42.769735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.769896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.769921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.344 qpair failed and we were unable to recover it. 00:30:38.344 [2024-04-17 06:56:42.770109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.770326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.770352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.344 qpair failed and we were unable to recover it. 
00:30:38.344 [2024-04-17 06:56:42.770526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.770719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.770747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.344 qpair failed and we were unable to recover it. 00:30:38.344 [2024-04-17 06:56:42.770890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.771050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.771076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.344 qpair failed and we were unable to recover it. 00:30:38.344 [2024-04-17 06:56:42.771211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.771426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.771451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.344 qpair failed and we were unable to recover it. 00:30:38.344 [2024-04-17 06:56:42.771627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.771782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.771807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.344 qpair failed and we were unable to recover it. 00:30:38.344 [2024-04-17 06:56:42.772001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.772202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.772246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.344 qpair failed and we were unable to recover it. 00:30:38.344 [2024-04-17 06:56:42.772425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.772578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.772605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.344 qpair failed and we were unable to recover it. 00:30:38.344 [2024-04-17 06:56:42.772893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.344 [2024-04-17 06:56:42.773125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.773153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.345 qpair failed and we were unable to recover it. 
00:30:38.345 [2024-04-17 06:56:42.773356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.773511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.773550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.345 qpair failed and we were unable to recover it. 00:30:38.345 [2024-04-17 06:56:42.773727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.773934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.773961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.345 qpair failed and we were unable to recover it. 00:30:38.345 [2024-04-17 06:56:42.774130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.774302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.774328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.345 qpair failed and we were unable to recover it. 00:30:38.345 [2024-04-17 06:56:42.774487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.774616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.774641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.345 qpair failed and we were unable to recover it. 00:30:38.345 [2024-04-17 06:56:42.774808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.774955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.774983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.345 qpair failed and we were unable to recover it. 00:30:38.345 [2024-04-17 06:56:42.775198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.775403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.775429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.345 qpair failed and we were unable to recover it. 00:30:38.345 [2024-04-17 06:56:42.775614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.775755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.775783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.345 qpair failed and we were unable to recover it. 
00:30:38.345 [2024-04-17 06:56:42.775964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.776706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.776739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.345 qpair failed and we were unable to recover it. 00:30:38.345 [2024-04-17 06:56:42.776955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.777141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.777184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.345 qpair failed and we were unable to recover it. 00:30:38.345 [2024-04-17 06:56:42.777350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.777487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.777512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.345 qpair failed and we were unable to recover it. 00:30:38.345 [2024-04-17 06:56:42.777720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.777912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.777978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.345 qpair failed and we were unable to recover it. 00:30:38.345 [2024-04-17 06:56:42.778122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.778319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.778345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.345 qpair failed and we were unable to recover it. 00:30:38.345 [2024-04-17 06:56:42.778487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.778717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.778781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.345 qpair failed and we were unable to recover it. 00:30:38.345 [2024-04-17 06:56:42.778932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.779131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.779158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.345 qpair failed and we were unable to recover it. 
00:30:38.345 [2024-04-17 06:56:42.779316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.779492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.779520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.345 qpair failed and we were unable to recover it. 00:30:38.345 [2024-04-17 06:56:42.779731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.779942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.779990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.345 qpair failed and we were unable to recover it. 00:30:38.345 [2024-04-17 06:56:42.780129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.780299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.780325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.345 qpair failed and we were unable to recover it. 00:30:38.345 [2024-04-17 06:56:42.780494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.780668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.780696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.345 qpair failed and we were unable to recover it. 00:30:38.345 [2024-04-17 06:56:42.780986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.781286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.781312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.345 qpair failed and we were unable to recover it. 00:30:38.345 [2024-04-17 06:56:42.781461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.781646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.781675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.345 qpair failed and we were unable to recover it. 00:30:38.345 [2024-04-17 06:56:42.781871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.782037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.782065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.345 qpair failed and we were unable to recover it. 
00:30:38.345 [2024-04-17 06:56:42.782207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.782354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.782379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.345 qpair failed and we were unable to recover it. 00:30:38.345 [2024-04-17 06:56:42.782607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.782906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.782955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.345 qpair failed and we were unable to recover it. 00:30:38.345 [2024-04-17 06:56:42.783132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.783327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.783354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.345 qpair failed and we were unable to recover it. 00:30:38.345 [2024-04-17 06:56:42.783530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.783768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.783796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.345 qpair failed and we were unable to recover it. 00:30:38.345 [2024-04-17 06:56:42.783966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.784137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.784184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.345 qpair failed and we were unable to recover it. 00:30:38.345 [2024-04-17 06:56:42.784389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.784699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.784750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.345 qpair failed and we were unable to recover it. 00:30:38.345 [2024-04-17 06:56:42.784965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.785142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.785186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.345 qpair failed and we were unable to recover it. 
00:30:38.345 [2024-04-17 06:56:42.785355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.345 [2024-04-17 06:56:42.785500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.785524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.346 qpair failed and we were unable to recover it. 00:30:38.346 [2024-04-17 06:56:42.785708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.785878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.785905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.346 qpair failed and we were unable to recover it. 00:30:38.346 [2024-04-17 06:56:42.786072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.786258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.786285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.346 qpair failed and we were unable to recover it. 00:30:38.346 [2024-04-17 06:56:42.786439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.786656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.786684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.346 qpair failed and we were unable to recover it. 00:30:38.346 [2024-04-17 06:56:42.786973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.787217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.787258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.346 qpair failed and we were unable to recover it. 00:30:38.346 [2024-04-17 06:56:42.787422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.787694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.787745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.346 qpair failed and we were unable to recover it. 00:30:38.346 [2024-04-17 06:56:42.787939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.788121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.788146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.346 qpair failed and we were unable to recover it. 
00:30:38.346 [2024-04-17 06:56:42.788354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.788562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.788590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.346 qpair failed and we were unable to recover it. 00:30:38.346 [2024-04-17 06:56:42.788749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.788942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.789000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.346 qpair failed and we were unable to recover it. 00:30:38.346 [2024-04-17 06:56:42.789187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.789358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.789383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.346 qpair failed and we were unable to recover it. 00:30:38.346 [2024-04-17 06:56:42.789593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.789747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.789787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.346 qpair failed and we were unable to recover it. 00:30:38.346 [2024-04-17 06:56:42.789943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.790141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.790186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.346 qpair failed and we were unable to recover it. 00:30:38.346 [2024-04-17 06:56:42.790367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.790557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.790584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.346 qpair failed and we were unable to recover it. 00:30:38.346 [2024-04-17 06:56:42.790737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.790906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.790934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.346 qpair failed and we were unable to recover it. 
00:30:38.346 [2024-04-17 06:56:42.791105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.791317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.791343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.346 qpair failed and we were unable to recover it. 00:30:38.346 [2024-04-17 06:56:42.791501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.791663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.791689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.346 qpair failed and we were unable to recover it. 00:30:38.346 [2024-04-17 06:56:42.791954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.792197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.792226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.346 qpair failed and we were unable to recover it. 00:30:38.346 [2024-04-17 06:56:42.792399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.792563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.792591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.346 qpair failed and we were unable to recover it. 00:30:38.346 [2024-04-17 06:56:42.792935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.793125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.793152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.346 qpair failed and we were unable to recover it. 00:30:38.346 [2024-04-17 06:56:42.793343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.793504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.793545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.346 qpair failed and we were unable to recover it. 00:30:38.346 [2024-04-17 06:56:42.793702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.793857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.793882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.346 qpair failed and we were unable to recover it. 
00:30:38.346 [2024-04-17 06:56:42.794032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.794256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.794282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.346 qpair failed and we were unable to recover it. 00:30:38.346 [2024-04-17 06:56:42.794407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.794585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.794610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.346 qpair failed and we were unable to recover it. 00:30:38.346 [2024-04-17 06:56:42.794793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.794918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.794943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.346 qpair failed and we were unable to recover it. 00:30:38.346 [2024-04-17 06:56:42.795124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.795297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.795323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.346 qpair failed and we were unable to recover it. 00:30:38.346 [2024-04-17 06:56:42.795469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.795593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.795618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.346 qpair failed and we were unable to recover it. 00:30:38.346 [2024-04-17 06:56:42.795780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.795935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.346 [2024-04-17 06:56:42.795961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.346 qpair failed and we were unable to recover it. 00:30:38.346 [2024-04-17 06:56:42.796083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.796267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.796293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.347 qpair failed and we were unable to recover it. 
00:30:38.347 [2024-04-17 06:56:42.796452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.796582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.796607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.347 qpair failed and we were unable to recover it. 00:30:38.347 [2024-04-17 06:56:42.796760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.796905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.796935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.347 qpair failed and we were unable to recover it. 00:30:38.347 [2024-04-17 06:56:42.797135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.797329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.797355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.347 qpair failed and we were unable to recover it. 00:30:38.347 [2024-04-17 06:56:42.797510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.797689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.797714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.347 qpair failed and we were unable to recover it. 00:30:38.347 [2024-04-17 06:56:42.797962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.798146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.798207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.347 qpair failed and we were unable to recover it. 00:30:38.347 [2024-04-17 06:56:42.798349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.798479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.798521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.347 qpair failed and we were unable to recover it. 00:30:38.347 [2024-04-17 06:56:42.798671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.798809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.798839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.347 qpair failed and we were unable to recover it. 
00:30:38.347 [2024-04-17 06:56:42.799045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.799224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.799250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.347 qpair failed and we were unable to recover it. 00:30:38.347 [2024-04-17 06:56:42.799402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.799599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.799624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.347 qpair failed and we were unable to recover it. 00:30:38.347 [2024-04-17 06:56:42.799757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.799941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.799982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.347 qpair failed and we were unable to recover it. 00:30:38.347 [2024-04-17 06:56:42.800114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.800314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.800339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.347 qpair failed and we were unable to recover it. 00:30:38.347 [2024-04-17 06:56:42.800472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.800628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.800653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.347 qpair failed and we were unable to recover it. 00:30:38.347 [2024-04-17 06:56:42.800939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.801135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.801184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.347 qpair failed and we were unable to recover it. 00:30:38.347 [2024-04-17 06:56:42.801334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.801489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.801514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.347 qpair failed and we were unable to recover it. 
00:30:38.347 [2024-04-17 06:56:42.801690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.801992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.802065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.347 qpair failed and we were unable to recover it. 00:30:38.347 [2024-04-17 06:56:42.802286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.802515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.802573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.347 qpair failed and we were unable to recover it. 00:30:38.347 [2024-04-17 06:56:42.802849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.803024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.803052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.347 qpair failed and we were unable to recover it. 00:30:38.347 [2024-04-17 06:56:42.803266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.803416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.803442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.347 qpair failed and we were unable to recover it. 00:30:38.347 [2024-04-17 06:56:42.803645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.803836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.803901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.347 qpair failed and we were unable to recover it. 00:30:38.347 [2024-04-17 06:56:42.804061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.804197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.804223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.347 qpair failed and we were unable to recover it. 00:30:38.347 [2024-04-17 06:56:42.804361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.804507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.804536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.347 qpair failed and we were unable to recover it. 
00:30:38.347 [2024-04-17 06:56:42.804697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.804885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.804913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.347 qpair failed and we were unable to recover it. 00:30:38.347 [2024-04-17 06:56:42.805084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.805264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.805290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.347 qpair failed and we were unable to recover it. 00:30:38.347 [2024-04-17 06:56:42.805483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.805646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.805696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.347 qpair failed and we were unable to recover it. 00:30:38.347 [2024-04-17 06:56:42.805963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.806159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.806197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.347 qpair failed and we were unable to recover it. 00:30:38.347 [2024-04-17 06:56:42.806344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.347 [2024-04-17 06:56:42.806496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.806524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.348 qpair failed and we were unable to recover it. 00:30:38.348 [2024-04-17 06:56:42.806690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.806858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.806886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.348 qpair failed and we were unable to recover it. 00:30:38.348 [2024-04-17 06:56:42.807060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.807243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.807269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.348 qpair failed and we were unable to recover it. 
00:30:38.348 [2024-04-17 06:56:42.807419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.807581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.807608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.348 qpair failed and we were unable to recover it. 00:30:38.348 [2024-04-17 06:56:42.807756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.807950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.807978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.348 qpair failed and we were unable to recover it. 00:30:38.348 [2024-04-17 06:56:42.808142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.808359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.808384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.348 qpair failed and we were unable to recover it. 00:30:38.348 [2024-04-17 06:56:42.808509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.808639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.808668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.348 qpair failed and we were unable to recover it. 00:30:38.348 [2024-04-17 06:56:42.808907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.809082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.809110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.348 qpair failed and we were unable to recover it. 00:30:38.348 [2024-04-17 06:56:42.809288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.809414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.809439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.348 qpair failed and we were unable to recover it. 00:30:38.348 [2024-04-17 06:56:42.809659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.809792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.809820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.348 qpair failed and we were unable to recover it. 
00:30:38.348 [2024-04-17 06:56:42.810007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.810219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.810253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.348 qpair failed and we were unable to recover it. 00:30:38.348 [2024-04-17 06:56:42.810490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.810685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.810714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.348 qpair failed and we were unable to recover it. 00:30:38.348 [2024-04-17 06:56:42.810959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.811127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.811155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.348 qpair failed and we were unable to recover it. 00:30:38.348 [2024-04-17 06:56:42.811334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.811526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.811586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.348 qpair failed and we were unable to recover it. 00:30:38.348 [2024-04-17 06:56:42.811793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.811973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.812001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.348 qpair failed and we were unable to recover it. 00:30:38.348 [2024-04-17 06:56:42.812192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.812376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.812401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.348 qpair failed and we were unable to recover it. 00:30:38.348 [2024-04-17 06:56:42.812582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.812753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.812785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.348 qpair failed and we were unable to recover it. 
00:30:38.348 [2024-04-17 06:56:42.812981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.813186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.813215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.348 qpair failed and we were unable to recover it. 00:30:38.348 [2024-04-17 06:56:42.813414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.813594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.813621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.348 qpair failed and we were unable to recover it. 00:30:38.348 [2024-04-17 06:56:42.813788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.813957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.813984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.348 qpair failed and we were unable to recover it. 00:30:38.348 [2024-04-17 06:56:42.814153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.814344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.814369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.348 qpair failed and we were unable to recover it. 00:30:38.348 [2024-04-17 06:56:42.814572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.814760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.814826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.348 qpair failed and we were unable to recover it. 00:30:38.348 [2024-04-17 06:56:42.815000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.815199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.815227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.348 qpair failed and we were unable to recover it. 00:30:38.348 [2024-04-17 06:56:42.815398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.815612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.815677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.348 qpair failed and we were unable to recover it. 
00:30:38.348 [2024-04-17 06:56:42.815930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.816143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.816185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.348 qpair failed and we were unable to recover it. 00:30:38.348 [2024-04-17 06:56:42.816365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.816575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.816640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.348 qpair failed and we were unable to recover it. 00:30:38.348 [2024-04-17 06:56:42.816826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.817026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.817054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.348 qpair failed and we were unable to recover it. 00:30:38.348 [2024-04-17 06:56:42.817270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.817471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.817498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.348 qpair failed and we were unable to recover it. 00:30:38.348 [2024-04-17 06:56:42.817637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.817907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.817958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.348 qpair failed and we were unable to recover it. 00:30:38.348 [2024-04-17 06:56:42.818145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.348 [2024-04-17 06:56:42.818321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.818347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.349 qpair failed and we were unable to recover it. 00:30:38.349 [2024-04-17 06:56:42.818469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.818707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.818748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.349 qpair failed and we were unable to recover it. 
00:30:38.349 [2024-04-17 06:56:42.818917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.819161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.819199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.349 qpair failed and we were unable to recover it. 00:30:38.349 [2024-04-17 06:56:42.819333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.819477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.819505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.349 qpair failed and we were unable to recover it. 00:30:38.349 [2024-04-17 06:56:42.819666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.819833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.819860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.349 qpair failed and we were unable to recover it. 00:30:38.349 [2024-04-17 06:56:42.820035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.820216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.820245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.349 qpair failed and we were unable to recover it. 00:30:38.349 [2024-04-17 06:56:42.820409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.820570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.820597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.349 qpair failed and we were unable to recover it. 00:30:38.349 [2024-04-17 06:56:42.820777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.820916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.820944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.349 qpair failed and we were unable to recover it. 00:30:38.349 [2024-04-17 06:56:42.821126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.821279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.821308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.349 qpair failed and we were unable to recover it. 
00:30:38.349 [2024-04-17 06:56:42.821477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.821677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.821705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.349 qpair failed and we were unable to recover it. 00:30:38.349 [2024-04-17 06:56:42.821880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.822025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.822052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.349 qpair failed and we were unable to recover it. 00:30:38.349 [2024-04-17 06:56:42.822187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.822385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.822413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.349 qpair failed and we were unable to recover it. 00:30:38.349 [2024-04-17 06:56:42.822552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.822690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.822718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.349 qpair failed and we were unable to recover it. 00:30:38.349 [2024-04-17 06:56:42.822897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.823056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.823081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.349 qpair failed and we were unable to recover it. 00:30:38.349 [2024-04-17 06:56:42.823243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.823372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.823397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.349 qpair failed and we were unable to recover it. 00:30:38.349 [2024-04-17 06:56:42.823609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.823758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.823799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.349 qpair failed and we were unable to recover it. 
00:30:38.349 [2024-04-17 06:56:42.824006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.824160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.824193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.349 qpair failed and we were unable to recover it. 00:30:38.349 [2024-04-17 06:56:42.824380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.824539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.824564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.349 qpair failed and we were unable to recover it. 00:30:38.349 [2024-04-17 06:56:42.824712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.824864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.824889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.349 qpair failed and we were unable to recover it. 00:30:38.349 [2024-04-17 06:56:42.825047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.825283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.825309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.349 qpair failed and we were unable to recover it. 00:30:38.349 [2024-04-17 06:56:42.825516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.825720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.825747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.349 qpair failed and we were unable to recover it. 00:30:38.349 [2024-04-17 06:56:42.825925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.826082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.826125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.349 qpair failed and we were unable to recover it. 00:30:38.349 [2024-04-17 06:56:42.826321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.826572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.826599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.349 qpair failed and we were unable to recover it. 
00:30:38.349 [2024-04-17 06:56:42.826778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.826987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.827012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.349 qpair failed and we were unable to recover it. 00:30:38.349 [2024-04-17 06:56:42.827198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.827338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.827364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.349 qpair failed and we were unable to recover it. 00:30:38.349 [2024-04-17 06:56:42.827487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.827642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.827667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.349 qpair failed and we were unable to recover it. 00:30:38.349 [2024-04-17 06:56:42.827849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.828093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.828122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.349 qpair failed and we were unable to recover it. 00:30:38.349 [2024-04-17 06:56:42.828276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.828443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.828478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.349 qpair failed and we were unable to recover it. 00:30:38.349 [2024-04-17 06:56:42.828621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.828792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.828820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.349 qpair failed and we were unable to recover it. 00:30:38.349 [2024-04-17 06:56:42.828985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.349 [2024-04-17 06:56:42.829171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.829217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.350 qpair failed and we were unable to recover it. 
00:30:38.350 [2024-04-17 06:56:42.829418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.829582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.829620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.350 qpair failed and we were unable to recover it. 00:30:38.350 [2024-04-17 06:56:42.829831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.829967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.829993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.350 qpair failed and we were unable to recover it. 00:30:38.350 [2024-04-17 06:56:42.830146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.830332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.830361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.350 qpair failed and we were unable to recover it. 00:30:38.350 [2024-04-17 06:56:42.830542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.830748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.830784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.350 qpair failed and we were unable to recover it. 00:30:38.350 [2024-04-17 06:56:42.830925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.831812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.831857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.350 qpair failed and we were unable to recover it. 00:30:38.350 [2024-04-17 06:56:42.832048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.832191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.832221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.350 qpair failed and we were unable to recover it. 00:30:38.350 [2024-04-17 06:56:42.832367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.832553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.832582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.350 qpair failed and we were unable to recover it. 
00:30:38.350 [2024-04-17 06:56:42.832743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.832892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.832917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.350 qpair failed and we were unable to recover it. 00:30:38.350 [2024-04-17 06:56:42.833102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.833286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.833319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.350 qpair failed and we were unable to recover it. 00:30:38.350 [2024-04-17 06:56:42.833448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.833670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.833695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.350 qpair failed and we were unable to recover it. 00:30:38.350 [2024-04-17 06:56:42.833819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.833999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.834040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.350 qpair failed and we were unable to recover it. 00:30:38.350 [2024-04-17 06:56:42.834230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.834391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.834416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.350 qpair failed and we were unable to recover it. 00:30:38.350 [2024-04-17 06:56:42.834605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.834755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.834782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.350 qpair failed and we were unable to recover it. 00:30:38.350 [2024-04-17 06:56:42.834958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.835096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.835124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.350 qpair failed and we were unable to recover it. 
00:30:38.350 [2024-04-17 06:56:42.835264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.835474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.835502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.350 qpair failed and we were unable to recover it. 00:30:38.350 [2024-04-17 06:56:42.835653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.835817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.835857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.350 qpair failed and we were unable to recover it. 00:30:38.350 [2024-04-17 06:56:42.835994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.836202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.836231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.350 qpair failed and we were unable to recover it. 00:30:38.350 [2024-04-17 06:56:42.836401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.836564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.836589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.350 qpair failed and we were unable to recover it. 00:30:38.350 [2024-04-17 06:56:42.836741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.836899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.836927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.350 qpair failed and we were unable to recover it. 00:30:38.350 [2024-04-17 06:56:42.837076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.837236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.837278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.350 qpair failed and we were unable to recover it. 00:30:38.350 [2024-04-17 06:56:42.837445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.837601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.837629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.350 qpair failed and we were unable to recover it. 
00:30:38.350 [2024-04-17 06:56:42.837763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.837927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.837955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.350 qpair failed and we were unable to recover it. 00:30:38.350 [2024-04-17 06:56:42.838147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.838285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.838311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.350 qpair failed and we were unable to recover it. 00:30:38.350 [2024-04-17 06:56:42.838437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.838584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.838608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.350 qpair failed and we were unable to recover it. 00:30:38.350 [2024-04-17 06:56:42.838822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.838969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.838996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.350 qpair failed and we were unable to recover it. 00:30:38.350 [2024-04-17 06:56:42.839131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.839299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.839328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.350 qpair failed and we were unable to recover it. 00:30:38.350 [2024-04-17 06:56:42.839494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.839716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.839765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.350 qpair failed and we were unable to recover it. 00:30:38.350 [2024-04-17 06:56:42.839976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.840186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.840215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.350 qpair failed and we were unable to recover it. 
00:30:38.350 [2024-04-17 06:56:42.840362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.350 [2024-04-17 06:56:42.840548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.840576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.351 qpair failed and we were unable to recover it. 00:30:38.351 [2024-04-17 06:56:42.840779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.840912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.840941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.351 qpair failed and we were unable to recover it. 00:30:38.351 [2024-04-17 06:56:42.841124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.841284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.841310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.351 qpair failed and we were unable to recover it. 00:30:38.351 [2024-04-17 06:56:42.841476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.841600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.841641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.351 qpair failed and we were unable to recover it. 00:30:38.351 [2024-04-17 06:56:42.841938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.842106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.842133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.351 qpair failed and we were unable to recover it. 00:30:38.351 [2024-04-17 06:56:42.842328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.842467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.842497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.351 qpair failed and we were unable to recover it. 00:30:38.351 [2024-04-17 06:56:42.842678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.842834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.842877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.351 qpair failed and we were unable to recover it. 
00:30:38.351 [2024-04-17 06:56:42.843052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.843214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.843240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.351 qpair failed and we were unable to recover it. 00:30:38.351 [2024-04-17 06:56:42.843364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.843491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.843516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.351 qpair failed and we were unable to recover it. 00:30:38.351 [2024-04-17 06:56:42.843644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.843799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.843826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.351 qpair failed and we were unable to recover it. 00:30:38.351 [2024-04-17 06:56:42.843980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.844162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.844208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.351 qpair failed and we were unable to recover it. 00:30:38.351 [2024-04-17 06:56:42.844368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.844556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.844584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.351 qpair failed and we were unable to recover it. 00:30:38.351 [2024-04-17 06:56:42.844772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.844941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.844966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.351 qpair failed and we were unable to recover it. 00:30:38.351 [2024-04-17 06:56:42.845098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.845297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.845325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.351 qpair failed and we were unable to recover it. 
00:30:38.351 [2024-04-17 06:56:42.845454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.845625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.845665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.351 qpair failed and we were unable to recover it. 00:30:38.351 [2024-04-17 06:56:42.845809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.845965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.845990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.351 qpair failed and we were unable to recover it. 00:30:38.351 [2024-04-17 06:56:42.846157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.846322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.846349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.351 qpair failed and we were unable to recover it. 00:30:38.351 [2024-04-17 06:56:42.846528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.846699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.846726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.351 qpair failed and we were unable to recover it. 00:30:38.351 [2024-04-17 06:56:42.847006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.847201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.847230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.351 qpair failed and we were unable to recover it. 00:30:38.351 [2024-04-17 06:56:42.847403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.847587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.847653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.351 qpair failed and we were unable to recover it. 00:30:38.351 [2024-04-17 06:56:42.847836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.847961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.847986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.351 qpair failed and we were unable to recover it. 
00:30:38.351 [2024-04-17 06:56:42.848152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.848363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.848389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.351 qpair failed and we were unable to recover it. 00:30:38.351 [2024-04-17 06:56:42.848548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.848677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.848702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.351 qpair failed and we were unable to recover it. 00:30:38.351 [2024-04-17 06:56:42.848876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.849050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.849078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.351 qpair failed and we were unable to recover it. 00:30:38.351 [2024-04-17 06:56:42.849254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.849403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.351 [2024-04-17 06:56:42.849431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.351 qpair failed and we were unable to recover it. 00:30:38.352 [2024-04-17 06:56:42.849605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.849777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.849805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.352 qpair failed and we were unable to recover it. 00:30:38.352 [2024-04-17 06:56:42.849985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.850139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.850181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.352 qpair failed and we were unable to recover it. 00:30:38.352 [2024-04-17 06:56:42.850310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.850441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.850475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.352 qpair failed and we were unable to recover it. 
00:30:38.352 [2024-04-17 06:56:42.850622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.850797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.850822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.352 qpair failed and we were unable to recover it. 00:30:38.352 [2024-04-17 06:56:42.850983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.851179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.851205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.352 qpair failed and we were unable to recover it. 00:30:38.352 [2024-04-17 06:56:42.851362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.851578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.851603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.352 qpair failed and we were unable to recover it. 00:30:38.352 [2024-04-17 06:56:42.851760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.851912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.851942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.352 qpair failed and we were unable to recover it. 00:30:38.352 [2024-04-17 06:56:42.852068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.852200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.852226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.352 qpair failed and we were unable to recover it. 00:30:38.352 [2024-04-17 06:56:42.852411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.852568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.852593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.352 qpair failed and we were unable to recover it. 00:30:38.352 [2024-04-17 06:56:42.852804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.852934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.852959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.352 qpair failed and we were unable to recover it. 
00:30:38.352 [2024-04-17 06:56:42.853150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.853318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.853344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.352 qpair failed and we were unable to recover it. 00:30:38.352 [2024-04-17 06:56:42.853484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.853677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.853702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.352 qpair failed and we were unable to recover it. 00:30:38.352 [2024-04-17 06:56:42.853851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.854007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.854034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.352 qpair failed and we were unable to recover it. 00:30:38.352 [2024-04-17 06:56:42.854228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.854390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.854415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.352 qpair failed and we were unable to recover it. 00:30:38.352 [2024-04-17 06:56:42.854558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.854708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.854750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.352 qpair failed and we were unable to recover it. 00:30:38.352 [2024-04-17 06:56:42.854898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.855060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.855087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.352 qpair failed and we were unable to recover it. 00:30:38.352 [2024-04-17 06:56:42.855253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.855412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.855439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.352 qpair failed and we were unable to recover it. 
00:30:38.352 [2024-04-17 06:56:42.855621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.855849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.855912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.352 qpair failed and we were unable to recover it. 00:30:38.352 [2024-04-17 06:56:42.856072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.856212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.856254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.352 qpair failed and we were unable to recover it. 00:30:38.352 [2024-04-17 06:56:42.856439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.856608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.856636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.352 qpair failed and we were unable to recover it. 00:30:38.352 [2024-04-17 06:56:42.856769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.856899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.856927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.352 qpair failed and we were unable to recover it. 00:30:38.352 [2024-04-17 06:56:42.857106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.857260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.857286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.352 qpair failed and we were unable to recover it. 00:30:38.352 [2024-04-17 06:56:42.857410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.857579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.857622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.352 qpair failed and we were unable to recover it. 00:30:38.352 [2024-04-17 06:56:42.857861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.857997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.858026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.352 qpair failed and we were unable to recover it. 
00:30:38.352 [2024-04-17 06:56:42.858198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.858330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.858359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.352 qpair failed and we were unable to recover it. 00:30:38.352 [2024-04-17 06:56:42.858559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.858724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.858752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.352 qpair failed and we were unable to recover it. 00:30:38.352 [2024-04-17 06:56:42.858897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.859054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.859096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.352 qpair failed and we were unable to recover it. 00:30:38.352 [2024-04-17 06:56:42.859281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.859435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.859460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.352 qpair failed and we were unable to recover it. 00:30:38.352 [2024-04-17 06:56:42.859637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.859809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.352 [2024-04-17 06:56:42.859837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.352 qpair failed and we were unable to recover it. 00:30:38.353 [2024-04-17 06:56:42.860002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.860195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.860223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.353 qpair failed and we were unable to recover it. 00:30:38.353 [2024-04-17 06:56:42.860365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.860517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.860552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.353 qpair failed and we were unable to recover it. 
00:30:38.353 [2024-04-17 06:56:42.860799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.861000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.861029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.353 qpair failed and we were unable to recover it. 00:30:38.353 [2024-04-17 06:56:42.861210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.861384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.861411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.353 qpair failed and we were unable to recover it. 00:30:38.353 [2024-04-17 06:56:42.861550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.861688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.861716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.353 qpair failed and we were unable to recover it. 00:30:38.353 [2024-04-17 06:56:42.861887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.862011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.862063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.353 qpair failed and we were unable to recover it. 00:30:38.353 [2024-04-17 06:56:42.862202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.862364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.862392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.353 qpair failed and we were unable to recover it. 00:30:38.353 [2024-04-17 06:56:42.862529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.862703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.862731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.353 qpair failed and we were unable to recover it. 00:30:38.353 [2024-04-17 06:56:42.862904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.863069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.863097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.353 qpair failed and we were unable to recover it. 
00:30:38.353 [2024-04-17 06:56:42.863279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.863450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.863488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.353 qpair failed and we were unable to recover it. 00:30:38.353 [2024-04-17 06:56:42.863677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.863962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.864014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.353 qpair failed and we were unable to recover it. 00:30:38.353 [2024-04-17 06:56:42.864211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.864384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.864412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.353 qpair failed and we were unable to recover it. 00:30:38.353 [2024-04-17 06:56:42.864565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.864734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.864762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.353 qpair failed and we were unable to recover it. 00:30:38.353 [2024-04-17 06:56:42.864941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.865092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.865134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.353 qpair failed and we were unable to recover it. 00:30:38.353 [2024-04-17 06:56:42.865328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.865455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.865484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.353 qpair failed and we were unable to recover it. 00:30:38.353 [2024-04-17 06:56:42.865659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.865822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.865850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.353 qpair failed and we were unable to recover it. 
00:30:38.353 [2024-04-17 06:56:42.866020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.866226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.866252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.353 qpair failed and we were unable to recover it. 00:30:38.353 [2024-04-17 06:56:42.866376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.866524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.866550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.353 qpair failed and we were unable to recover it. 00:30:38.353 [2024-04-17 06:56:42.866749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.866926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.866952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.353 qpair failed and we were unable to recover it. 00:30:38.353 [2024-04-17 06:56:42.867106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.867259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.867285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.353 qpair failed and we were unable to recover it. 00:30:38.353 [2024-04-17 06:56:42.867431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.867571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.867596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.353 qpair failed and we were unable to recover it. 00:30:38.353 [2024-04-17 06:56:42.867809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.867982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.868010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.353 qpair failed and we were unable to recover it. 00:30:38.353 [2024-04-17 06:56:42.868154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.868308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.868333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.353 qpair failed and we were unable to recover it. 
00:30:38.353 [2024-04-17 06:56:42.868462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.868638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.868662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.353 qpair failed and we were unable to recover it. 00:30:38.353 [2024-04-17 06:56:42.868883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.869004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.869029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.353 qpair failed and we were unable to recover it. 00:30:38.353 [2024-04-17 06:56:42.869185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.869321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.869346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.353 qpair failed and we were unable to recover it. 00:30:38.353 [2024-04-17 06:56:42.869474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.869628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.869653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.353 qpair failed and we were unable to recover it. 00:30:38.353 [2024-04-17 06:56:42.869829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.869966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.869994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.353 qpair failed and we were unable to recover it. 00:30:38.353 [2024-04-17 06:56:42.870183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.870374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.353 [2024-04-17 06:56:42.870403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.353 qpair failed and we were unable to recover it. 00:30:38.354 [2024-04-17 06:56:42.870561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.870680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.870705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.354 qpair failed and we were unable to recover it. 
00:30:38.354 [2024-04-17 06:56:42.870884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.871023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.871050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.354 qpair failed and we were unable to recover it. 00:30:38.354 [2024-04-17 06:56:42.871239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.871391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.871416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.354 qpair failed and we were unable to recover it. 00:30:38.354 [2024-04-17 06:56:42.871549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.871680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.871705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.354 qpair failed and we were unable to recover it. 00:30:38.354 [2024-04-17 06:56:42.871862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.872036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.872061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.354 qpair failed and we were unable to recover it. 00:30:38.354 [2024-04-17 06:56:42.872247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.872397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.872422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.354 qpair failed and we were unable to recover it. 00:30:38.354 [2024-04-17 06:56:42.872572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.872701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.872727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.354 qpair failed and we were unable to recover it. 00:30:38.354 [2024-04-17 06:56:42.872896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.873089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.873113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.354 qpair failed and we were unable to recover it. 
00:30:38.354 [2024-04-17 06:56:42.873274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.873435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.873476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.354 qpair failed and we were unable to recover it. 00:30:38.354 [2024-04-17 06:56:42.873663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.873787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.873816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.354 qpair failed and we were unable to recover it. 00:30:38.354 [2024-04-17 06:56:42.873942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.874119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.874148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.354 qpair failed and we were unable to recover it. 00:30:38.354 [2024-04-17 06:56:42.874329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.874514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.874579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.354 qpair failed and we were unable to recover it. 00:30:38.354 [2024-04-17 06:56:42.874785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.874960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.874990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.354 qpair failed and we were unable to recover it. 00:30:38.354 [2024-04-17 06:56:42.875158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.875373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.875400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.354 qpair failed and we were unable to recover it. 00:30:38.354 [2024-04-17 06:56:42.875576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.875698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.875723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.354 qpair failed and we were unable to recover it. 
00:30:38.354 [2024-04-17 06:56:42.875867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.876035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.876063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.354 qpair failed and we were unable to recover it. 00:30:38.354 [2024-04-17 06:56:42.876233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.876388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.876431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.354 qpair failed and we were unable to recover it. 00:30:38.354 [2024-04-17 06:56:42.876574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.876738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.876765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.354 qpair failed and we were unable to recover it. 00:30:38.354 [2024-04-17 06:56:42.876949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.877124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.877151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.354 qpair failed and we were unable to recover it. 00:30:38.354 [2024-04-17 06:56:42.877337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.877484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.877513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.354 qpair failed and we were unable to recover it. 00:30:38.354 [2024-04-17 06:56:42.877724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.877888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.877916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.354 qpair failed and we were unable to recover it. 00:30:38.354 [2024-04-17 06:56:42.878053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.878253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.878281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.354 qpair failed and we were unable to recover it. 
00:30:38.354 [2024-04-17 06:56:42.878452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.878640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.878668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.354 qpair failed and we were unable to recover it. 00:30:38.354 [2024-04-17 06:56:42.878829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.878999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.879027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.354 qpair failed and we were unable to recover it. 00:30:38.354 [2024-04-17 06:56:42.879183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.879363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.879405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.354 qpair failed and we were unable to recover it. 00:30:38.354 [2024-04-17 06:56:42.879594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.879795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.879822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.354 qpair failed and we were unable to recover it. 00:30:38.354 [2024-04-17 06:56:42.879988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.880156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.880200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.354 qpair failed and we were unable to recover it. 00:30:38.354 [2024-04-17 06:56:42.880360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.880542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.880602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.354 qpair failed and we were unable to recover it. 00:30:38.354 [2024-04-17 06:56:42.880810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.881028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.354 [2024-04-17 06:56:42.881058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.354 qpair failed and we were unable to recover it. 
00:30:38.355 [2024-04-17 06:56:42.881249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.881437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.881476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.355 qpair failed and we were unable to recover it. 00:30:38.355 [2024-04-17 06:56:42.881630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.881807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.881834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.355 qpair failed and we were unable to recover it. 00:30:38.355 [2024-04-17 06:56:42.881997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.882187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.882212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.355 qpair failed and we were unable to recover it. 00:30:38.355 [2024-04-17 06:56:42.882368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.882555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.882602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.355 qpair failed and we were unable to recover it. 00:30:38.355 [2024-04-17 06:56:42.882788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.882962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.882987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.355 qpair failed and we were unable to recover it. 00:30:38.355 [2024-04-17 06:56:42.883193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.883359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.883387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.355 qpair failed and we were unable to recover it. 00:30:38.355 [2024-04-17 06:56:42.883521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.883686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.883714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.355 qpair failed and we were unable to recover it. 
00:30:38.355 [2024-04-17 06:56:42.883883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.884082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.884110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.355 qpair failed and we were unable to recover it. 00:30:38.355 [2024-04-17 06:56:42.884291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.884498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.884559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.355 qpair failed and we were unable to recover it. 00:30:38.355 [2024-04-17 06:56:42.884767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.884918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.884943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.355 qpair failed and we were unable to recover it. 00:30:38.355 [2024-04-17 06:56:42.885127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.885282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.885311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.355 qpair failed and we were unable to recover it. 00:30:38.355 [2024-04-17 06:56:42.885474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.885635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.885660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.355 qpair failed and we were unable to recover it. 00:30:38.355 [2024-04-17 06:56:42.885831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.885986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.886011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.355 qpair failed and we were unable to recover it. 00:30:38.355 [2024-04-17 06:56:42.886199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.886331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.886356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.355 qpair failed and we were unable to recover it. 
00:30:38.355 [2024-04-17 06:56:42.886549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.886695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.886720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.355 qpair failed and we were unable to recover it. 00:30:38.355 [2024-04-17 06:56:42.886849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.887021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.887049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.355 qpair failed and we were unable to recover it. 00:30:38.355 [2024-04-17 06:56:42.887216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.887373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.887399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.355 qpair failed and we were unable to recover it. 00:30:38.355 [2024-04-17 06:56:42.887524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.887677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.887702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.355 qpair failed and we were unable to recover it. 00:30:38.355 [2024-04-17 06:56:42.887916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.888065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.888089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.355 qpair failed and we were unable to recover it. 00:30:38.355 [2024-04-17 06:56:42.888280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.888431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.888485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.355 qpair failed and we were unable to recover it. 00:30:38.355 [2024-04-17 06:56:42.888784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.888986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.889014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.355 qpair failed and we were unable to recover it. 
00:30:38.355 [2024-04-17 06:56:42.889200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.889352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.889382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.355 qpair failed and we were unable to recover it. 00:30:38.355 [2024-04-17 06:56:42.889566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.889705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.889734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.355 qpair failed and we were unable to recover it. 00:30:38.355 [2024-04-17 06:56:42.889904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.890056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.890099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.355 qpair failed and we were unable to recover it. 00:30:38.355 [2024-04-17 06:56:42.890279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.890406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.890431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.355 qpair failed and we were unable to recover it. 00:30:38.355 [2024-04-17 06:56:42.890565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.890721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.890749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.355 qpair failed and we were unable to recover it. 00:30:38.355 [2024-04-17 06:56:42.890960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.891086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.891110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.355 qpair failed and we were unable to recover it. 00:30:38.355 [2024-04-17 06:56:42.891279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.891414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.891439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.355 qpair failed and we were unable to recover it. 
00:30:38.355 [2024-04-17 06:56:42.891597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.891745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.355 [2024-04-17 06:56:42.891770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.355 qpair failed and we were unable to recover it. 00:30:38.356 [2024-04-17 06:56:42.891941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.892118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.892142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.356 qpair failed and we were unable to recover it. 00:30:38.356 [2024-04-17 06:56:42.892320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.892446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.892478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.356 qpair failed and we were unable to recover it. 00:30:38.356 [2024-04-17 06:56:42.892607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.892765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.892794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.356 qpair failed and we were unable to recover it. 00:30:38.356 [2024-04-17 06:56:42.892979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.893155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.893198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.356 qpair failed and we were unable to recover it. 00:30:38.356 [2024-04-17 06:56:42.893377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.893499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.893523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.356 qpair failed and we were unable to recover it. 00:30:38.356 [2024-04-17 06:56:42.893656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.893814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.893839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.356 qpair failed and we were unable to recover it. 
00:30:38.356 [2024-04-17 06:56:42.894023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.894182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.894223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.356 qpair failed and we were unable to recover it. 00:30:38.356 [2024-04-17 06:56:42.894436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.894595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.894620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.356 qpair failed and we were unable to recover it. 00:30:38.356 [2024-04-17 06:56:42.894748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.894937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.894962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.356 qpair failed and we were unable to recover it. 00:30:38.356 [2024-04-17 06:56:42.895089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.895271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.895297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.356 qpair failed and we were unable to recover it. 00:30:38.356 [2024-04-17 06:56:42.895448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.895607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.895631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.356 qpair failed and we were unable to recover it. 00:30:38.356 [2024-04-17 06:56:42.895787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.895969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.895996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.356 qpair failed and we were unable to recover it. 00:30:38.356 [2024-04-17 06:56:42.896180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.896322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.896346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.356 qpair failed and we were unable to recover it. 
00:30:38.356 [2024-04-17 06:56:42.896505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.896652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.896678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.356 qpair failed and we were unable to recover it. 00:30:38.356 [2024-04-17 06:56:42.896835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.896960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.896986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.356 qpair failed and we were unable to recover it. 00:30:38.356 [2024-04-17 06:56:42.897133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.897276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.897305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.356 qpair failed and we were unable to recover it. 00:30:38.356 [2024-04-17 06:56:42.897440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.897612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.897639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.356 qpair failed and we were unable to recover it. 00:30:38.356 [2024-04-17 06:56:42.897820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.897978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.898003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.356 qpair failed and we were unable to recover it. 00:30:38.356 [2024-04-17 06:56:42.898126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.898297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.898322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.356 qpair failed and we were unable to recover it. 00:30:38.356 [2024-04-17 06:56:42.898552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.898859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.898911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.356 qpair failed and we were unable to recover it. 
00:30:38.356 [2024-04-17 06:56:42.899126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.899274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.899300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.356 qpair failed and we were unable to recover it. 00:30:38.356 [2024-04-17 06:56:42.899503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.899651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.899679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.356 qpair failed and we were unable to recover it. 00:30:38.356 [2024-04-17 06:56:42.899858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.899987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.900014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.356 qpair failed and we were unable to recover it. 00:30:38.356 [2024-04-17 06:56:42.900174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.900367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.900395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.356 qpair failed and we were unable to recover it. 00:30:38.356 [2024-04-17 06:56:42.900570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.356 [2024-04-17 06:56:42.900751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.900776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.357 qpair failed and we were unable to recover it. 00:30:38.357 [2024-04-17 06:56:42.900924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.901051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.901076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.357 qpair failed and we were unable to recover it. 00:30:38.357 [2024-04-17 06:56:42.901305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.901439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.901463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.357 qpair failed and we were unable to recover it. 
00:30:38.357 [2024-04-17 06:56:42.901617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.901767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.901794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.357 qpair failed and we were unable to recover it. 00:30:38.357 [2024-04-17 06:56:42.901995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.902190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.902218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.357 qpair failed and we were unable to recover it. 00:30:38.357 [2024-04-17 06:56:42.902406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.902600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.902642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.357 qpair failed and we were unable to recover it. 00:30:38.357 [2024-04-17 06:56:42.902841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.903020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.903063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.357 qpair failed and we were unable to recover it. 00:30:38.357 [2024-04-17 06:56:42.903282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.903407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.903432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.357 qpair failed and we were unable to recover it. 00:30:38.357 [2024-04-17 06:56:42.903656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.903808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.903849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.357 qpair failed and we were unable to recover it. 00:30:38.357 [2024-04-17 06:56:42.904038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.904184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.904209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.357 qpair failed and we were unable to recover it. 
00:30:38.357 [2024-04-17 06:56:42.904392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.904544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.904569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.357 qpair failed and we were unable to recover it. 00:30:38.357 [2024-04-17 06:56:42.904722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.904926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.904954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.357 qpair failed and we were unable to recover it. 00:30:38.357 [2024-04-17 06:56:42.905108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.905275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.905301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.357 qpair failed and we were unable to recover it. 00:30:38.357 [2024-04-17 06:56:42.905472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.905623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.905648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.357 qpair failed and we were unable to recover it. 00:30:38.357 [2024-04-17 06:56:42.905871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.906031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.906071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.357 qpair failed and we were unable to recover it. 00:30:38.357 [2024-04-17 06:56:42.906223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.906388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.906416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.357 qpair failed and we were unable to recover it. 00:30:38.357 [2024-04-17 06:56:42.906591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.906752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.906780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.357 qpair failed and we were unable to recover it. 
00:30:38.357 [2024-04-17 06:56:42.906960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.907157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.907191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.357 qpair failed and we were unable to recover it. 00:30:38.357 [2024-04-17 06:56:42.907326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.907453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.907477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.357 qpair failed and we were unable to recover it. 00:30:38.357 [2024-04-17 06:56:42.907637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.907816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.907844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.357 qpair failed and we were unable to recover it. 00:30:38.357 [2024-04-17 06:56:42.908041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.908184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.908212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.357 qpair failed and we were unable to recover it. 00:30:38.357 [2024-04-17 06:56:42.908351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.908484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.908511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.357 qpair failed and we were unable to recover it. 00:30:38.357 [2024-04-17 06:56:42.908650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.908830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.908872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.357 qpair failed and we were unable to recover it. 00:30:38.357 [2024-04-17 06:56:42.909014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.909153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.909206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.357 qpair failed and we were unable to recover it. 
00:30:38.357 [2024-04-17 06:56:42.909359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.909494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.909522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.357 qpair failed and we were unable to recover it. 00:30:38.357 [2024-04-17 06:56:42.909704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.909850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.909878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.357 qpair failed and we were unable to recover it. 00:30:38.357 [2024-04-17 06:56:42.910049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.910184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.910226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.357 qpair failed and we were unable to recover it. 00:30:38.357 [2024-04-17 06:56:42.910358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.910568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.910618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.357 qpair failed and we were unable to recover it. 00:30:38.357 [2024-04-17 06:56:42.910768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.910948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.910974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.357 qpair failed and we were unable to recover it. 00:30:38.357 [2024-04-17 06:56:42.911093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.357 [2024-04-17 06:56:42.911271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.911305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.358 qpair failed and we were unable to recover it. 00:30:38.358 [2024-04-17 06:56:42.911459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.911619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.911666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.358 qpair failed and we were unable to recover it. 
00:30:38.358 [2024-04-17 06:56:42.911808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.911988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.912016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.358 qpair failed and we were unable to recover it. 00:30:38.358 [2024-04-17 06:56:42.912229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.912413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.912438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.358 qpair failed and we were unable to recover it. 00:30:38.358 [2024-04-17 06:56:42.912580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.912733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.912761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.358 qpair failed and we were unable to recover it. 00:30:38.358 [2024-04-17 06:56:42.912930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.913088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.913130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.358 qpair failed and we were unable to recover it. 00:30:38.358 [2024-04-17 06:56:42.913278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.913418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.913446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.358 qpair failed and we were unable to recover it. 00:30:38.358 [2024-04-17 06:56:42.913614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.913807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.913853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.358 qpair failed and we were unable to recover it. 00:30:38.358 [2024-04-17 06:56:42.914025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.914197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.914226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.358 qpair failed and we were unable to recover it. 
00:30:38.358 [2024-04-17 06:56:42.914434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.914607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.914652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.358 qpair failed and we were unable to recover it. 00:30:38.358 [2024-04-17 06:56:42.914826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.915033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.915061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.358 qpair failed and we were unable to recover it. 00:30:38.358 [2024-04-17 06:56:42.915261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.915398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.915426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.358 qpair failed and we were unable to recover it. 00:30:38.358 [2024-04-17 06:56:42.915564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.915714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.915739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.358 qpair failed and we were unable to recover it. 00:30:38.358 [2024-04-17 06:56:42.915899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.916075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.916103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.358 qpair failed and we were unable to recover it. 00:30:38.358 [2024-04-17 06:56:42.916280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.916426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.916464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.358 qpair failed and we were unable to recover it. 00:30:38.358 [2024-04-17 06:56:42.916637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.916829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.916879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.358 qpair failed and we were unable to recover it. 
00:30:38.358 [2024-04-17 06:56:42.917082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.917264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.917293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.358 qpair failed and we were unable to recover it. 00:30:38.358 [2024-04-17 06:56:42.917440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.917606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.917632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.358 qpair failed and we were unable to recover it. 00:30:38.358 [2024-04-17 06:56:42.917765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.917968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.918006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.358 qpair failed and we were unable to recover it. 00:30:38.358 [2024-04-17 06:56:42.918171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.918333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.918358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.358 qpair failed and we were unable to recover it. 00:30:38.358 [2024-04-17 06:56:42.918517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.918739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.918798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.358 qpair failed and we were unable to recover it. 00:30:38.358 [2024-04-17 06:56:42.918978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.919163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.919216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.358 qpair failed and we were unable to recover it. 00:30:38.358 [2024-04-17 06:56:42.919436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.919687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.919734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.358 qpair failed and we were unable to recover it. 
00:30:38.358 [2024-04-17 06:56:42.919910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.920058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.920086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.358 qpair failed and we were unable to recover it. 00:30:38.358 [2024-04-17 06:56:42.920296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.920441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.920473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.358 qpair failed and we were unable to recover it. 00:30:38.358 [2024-04-17 06:56:42.920605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.920765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.920793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.358 qpair failed and we were unable to recover it. 00:30:38.358 [2024-04-17 06:56:42.920941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.921084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.921111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.358 qpair failed and we were unable to recover it. 00:30:38.358 [2024-04-17 06:56:42.921302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.921466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.921496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.358 qpair failed and we were unable to recover it. 00:30:38.358 [2024-04-17 06:56:42.921667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.921843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.921868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.358 qpair failed and we were unable to recover it. 00:30:38.358 [2024-04-17 06:56:42.922021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.358 [2024-04-17 06:56:42.922154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.359 [2024-04-17 06:56:42.922192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.359 qpair failed and we were unable to recover it. 
00:30:38.359 [2024-04-17 06:56:42.922379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.359 [2024-04-17 06:56:42.922511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.359 [2024-04-17 06:56:42.922536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.359 qpair failed and we were unable to recover it. 00:30:38.359 [2024-04-17 06:56:42.922702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.359 [2024-04-17 06:56:42.922945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.359 [2024-04-17 06:56:42.923004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.359 qpair failed and we were unable to recover it. 00:30:38.359 [2024-04-17 06:56:42.923222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.359 [2024-04-17 06:56:42.923381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.359 [2024-04-17 06:56:42.923407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.359 qpair failed and we were unable to recover it. 00:30:38.634 [2024-04-17 06:56:42.923612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.634 [2024-04-17 06:56:42.923771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.634 [2024-04-17 06:56:42.923813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.634 qpair failed and we were unable to recover it. 00:30:38.634 [2024-04-17 06:56:42.923989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.634 [2024-04-17 06:56:42.924184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.634 [2024-04-17 06:56:42.924222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.634 qpair failed and we were unable to recover it. 00:30:38.634 [2024-04-17 06:56:42.924423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.634 [2024-04-17 06:56:42.924618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.634 [2024-04-17 06:56:42.924652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.634 qpair failed and we were unable to recover it. 00:30:38.634 [2024-04-17 06:56:42.924850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.634 [2024-04-17 06:56:42.924994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.634 [2024-04-17 06:56:42.925028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.634 qpair failed and we were unable to recover it. 
00:30:38.634 [2024-04-17 06:56:42.925208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.634 [2024-04-17 06:56:42.925401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.634 [2024-04-17 06:56:42.925435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.634 qpair failed and we were unable to recover it. 00:30:38.634 [2024-04-17 06:56:42.925661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.634 [2024-04-17 06:56:42.925861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.634 [2024-04-17 06:56:42.925892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.634 qpair failed and we were unable to recover it. 00:30:38.634 [2024-04-17 06:56:42.926050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.634 [2024-04-17 06:56:42.926189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.634 [2024-04-17 06:56:42.926215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.634 qpair failed and we were unable to recover it. 00:30:38.634 [2024-04-17 06:56:42.926372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.634 [2024-04-17 06:56:42.926552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.634 [2024-04-17 06:56:42.926578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.634 qpair failed and we were unable to recover it. 00:30:38.634 [2024-04-17 06:56:42.926762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.634 [2024-04-17 06:56:42.926916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.634 [2024-04-17 06:56:42.926941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.634 qpair failed and we were unable to recover it. 00:30:38.634 [2024-04-17 06:56:42.927100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.634 [2024-04-17 06:56:42.927223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.634 [2024-04-17 06:56:42.927249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.634 qpair failed and we were unable to recover it. 00:30:38.634 [2024-04-17 06:56:42.927403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.634 [2024-04-17 06:56:42.927571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.634 [2024-04-17 06:56:42.927596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.634 qpair failed and we were unable to recover it. 
00:30:38.634 [2024-04-17 06:56:42.927753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.634 [2024-04-17 06:56:42.927957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.634 [2024-04-17 06:56:42.927985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.634 qpair failed and we were unable to recover it. 00:30:38.634 [2024-04-17 06:56:42.928165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.634 [2024-04-17 06:56:42.928372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.634 [2024-04-17 06:56:42.928401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.634 qpair failed and we were unable to recover it. 00:30:38.634 [2024-04-17 06:56:42.928644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.634 [2024-04-17 06:56:42.928857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.634 [2024-04-17 06:56:42.928908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.634 qpair failed and we were unable to recover it. 00:30:38.634 [2024-04-17 06:56:42.929109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.634 [2024-04-17 06:56:42.929303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.634 [2024-04-17 06:56:42.929331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.634 qpair failed and we were unable to recover it. 00:30:38.634 [2024-04-17 06:56:42.929502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.634 [2024-04-17 06:56:42.929662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.634 [2024-04-17 06:56:42.929690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.634 qpair failed and we were unable to recover it. 00:30:38.634 [2024-04-17 06:56:42.929906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.930087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.930115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.635 qpair failed and we were unable to recover it. 00:30:38.635 [2024-04-17 06:56:42.930314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.930455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.930483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.635 qpair failed and we were unable to recover it. 
00:30:38.635 [2024-04-17 06:56:42.930631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.930769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.930802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.635 qpair failed and we were unable to recover it. 00:30:38.635 [2024-04-17 06:56:42.930971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.931139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.931167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.635 qpair failed and we were unable to recover it. 00:30:38.635 [2024-04-17 06:56:42.931359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.931521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.931547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.635 qpair failed and we were unable to recover it. 00:30:38.635 [2024-04-17 06:56:42.931734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.931888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.931917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.635 qpair failed and we were unable to recover it. 00:30:38.635 [2024-04-17 06:56:42.932087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.932229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.932260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.635 qpair failed and we were unable to recover it. 00:30:38.635 [2024-04-17 06:56:42.932408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.932609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.932655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.635 qpair failed and we were unable to recover it. 00:30:38.635 [2024-04-17 06:56:42.932853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.933024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.933051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.635 qpair failed and we were unable to recover it. 
00:30:38.635 [2024-04-17 06:56:42.933214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.933385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.933415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.635 qpair failed and we were unable to recover it. 00:30:38.635 [2024-04-17 06:56:42.933593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.933731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.933759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.635 qpair failed and we were unable to recover it. 00:30:38.635 [2024-04-17 06:56:42.933930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.934090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.934118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.635 qpair failed and we were unable to recover it. 00:30:38.635 [2024-04-17 06:56:42.934313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.934508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.934536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.635 qpair failed and we were unable to recover it. 00:30:38.635 [2024-04-17 06:56:42.934706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.934891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.934919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.635 qpair failed and we were unable to recover it. 00:30:38.635 [2024-04-17 06:56:42.935095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.935235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.935264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.635 qpair failed and we were unable to recover it. 00:30:38.635 [2024-04-17 06:56:42.935474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.935626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.935666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.635 qpair failed and we were unable to recover it. 
00:30:38.635 [2024-04-17 06:56:42.935843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.936051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.936078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.635 qpair failed and we were unable to recover it. 00:30:38.635 [2024-04-17 06:56:42.936245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.936396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.936424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.635 qpair failed and we were unable to recover it. 00:30:38.635 [2024-04-17 06:56:42.936632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.936820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.936846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.635 qpair failed and we were unable to recover it. 00:30:38.635 [2024-04-17 06:56:42.937032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.937209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.937238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.635 qpair failed and we were unable to recover it. 00:30:38.635 [2024-04-17 06:56:42.937410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.937564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.937607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.635 qpair failed and we were unable to recover it. 00:30:38.635 [2024-04-17 06:56:42.937768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.937967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.937992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.635 qpair failed and we were unable to recover it. 00:30:38.635 [2024-04-17 06:56:42.938203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.938350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.938378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.635 qpair failed and we were unable to recover it. 
00:30:38.635 [2024-04-17 06:56:42.938555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.938737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.938762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.635 qpair failed and we were unable to recover it. 00:30:38.635 [2024-04-17 06:56:42.938920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.939068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.939093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.635 qpair failed and we were unable to recover it. 00:30:38.635 [2024-04-17 06:56:42.939263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.939460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.939488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.635 qpair failed and we were unable to recover it. 00:30:38.635 [2024-04-17 06:56:42.939661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.939823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.939850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.635 qpair failed and we were unable to recover it. 00:30:38.635 [2024-04-17 06:56:42.940026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.940172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.940207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.635 qpair failed and we were unable to recover it. 00:30:38.635 [2024-04-17 06:56:42.940384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.940532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.635 [2024-04-17 06:56:42.940558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.635 qpair failed and we were unable to recover it. 00:30:38.636 [2024-04-17 06:56:42.940677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.940801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.940826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.636 qpair failed and we were unable to recover it. 
00:30:38.636 [2024-04-17 06:56:42.940954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.941142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.941185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.636 qpair failed and we were unable to recover it. 00:30:38.636 [2024-04-17 06:56:42.941375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.941569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.941594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.636 qpair failed and we were unable to recover it. 00:30:38.636 [2024-04-17 06:56:42.941748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.941927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.941951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.636 qpair failed and we were unable to recover it. 00:30:38.636 [2024-04-17 06:56:42.942130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.942311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.942339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.636 qpair failed and we were unable to recover it. 00:30:38.636 [2024-04-17 06:56:42.942479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.942646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.942673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.636 qpair failed and we were unable to recover it. 00:30:38.636 [2024-04-17 06:56:42.942845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.942976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.943003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.636 qpair failed and we were unable to recover it. 00:30:38.636 [2024-04-17 06:56:42.943150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.943310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.943351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.636 qpair failed and we were unable to recover it. 
00:30:38.636 [2024-04-17 06:56:42.943491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.943778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.943831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.636 qpair failed and we were unable to recover it. 00:30:38.636 [2024-04-17 06:56:42.944034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.944216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.944242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.636 qpair failed and we were unable to recover it. 00:30:38.636 [2024-04-17 06:56:42.944417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.944602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.944646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.636 qpair failed and we were unable to recover it. 00:30:38.636 [2024-04-17 06:56:42.944808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.944935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.944960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.636 qpair failed and we were unable to recover it. 00:30:38.636 [2024-04-17 06:56:42.945146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.945366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.945394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.636 qpair failed and we were unable to recover it. 00:30:38.636 [2024-04-17 06:56:42.945564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.945743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.945775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.636 qpair failed and we were unable to recover it. 00:30:38.636 [2024-04-17 06:56:42.945961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.946139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.946164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.636 qpair failed and we were unable to recover it. 
00:30:38.636 [2024-04-17 06:56:42.946367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.946523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.946548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.636 qpair failed and we were unable to recover it. 00:30:38.636 [2024-04-17 06:56:42.946703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.946835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.946862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.636 qpair failed and we were unable to recover it. 00:30:38.636 [2024-04-17 06:56:42.947053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.947182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.947208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.636 qpair failed and we were unable to recover it. 00:30:38.636 [2024-04-17 06:56:42.947393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.947555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.947600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.636 qpair failed and we were unable to recover it. 00:30:38.636 [2024-04-17 06:56:42.947780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.947985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.948031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.636 qpair failed and we were unable to recover it. 00:30:38.636 [2024-04-17 06:56:42.948248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.948380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.948408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.636 qpair failed and we were unable to recover it. 00:30:38.636 [2024-04-17 06:56:42.948565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.948707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.948734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.636 qpair failed and we were unable to recover it. 
00:30:38.636 [2024-04-17 06:56:42.948863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.948990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.949017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.636 qpair failed and we were unable to recover it. 00:30:38.636 [2024-04-17 06:56:42.949191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.949372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.949398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.636 qpair failed and we were unable to recover it. 00:30:38.636 [2024-04-17 06:56:42.949529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.949715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.949748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.636 qpair failed and we were unable to recover it. 00:30:38.636 [2024-04-17 06:56:42.949912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.950085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.950113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.636 qpair failed and we were unable to recover it. 00:30:38.636 [2024-04-17 06:56:42.950319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.950508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.950550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.636 qpair failed and we were unable to recover it. 00:30:38.636 [2024-04-17 06:56:42.950749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.950916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.950963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.636 qpair failed and we were unable to recover it. 00:30:38.636 [2024-04-17 06:56:42.951164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.951343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.636 [2024-04-17 06:56:42.951369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.637 qpair failed and we were unable to recover it. 
00:30:38.637 [2024-04-17 06:56:42.951582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.951716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.951743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.637 qpair failed and we were unable to recover it. 00:30:38.637 [2024-04-17 06:56:42.951942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.952109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.952137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.637 qpair failed and we were unable to recover it. 00:30:38.637 [2024-04-17 06:56:42.952330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.952502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.952530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.637 qpair failed and we were unable to recover it. 00:30:38.637 [2024-04-17 06:56:42.952684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.952901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.952934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.637 qpair failed and we were unable to recover it. 00:30:38.637 [2024-04-17 06:56:42.953102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.953272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.953301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.637 qpair failed and we were unable to recover it. 00:30:38.637 [2024-04-17 06:56:42.953469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.953631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.953663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.637 qpair failed and we were unable to recover it. 00:30:38.637 [2024-04-17 06:56:42.953844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.954012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.954039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.637 qpair failed and we were unable to recover it. 
00:30:38.637 [2024-04-17 06:56:42.954209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.954374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.954402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.637 qpair failed and we were unable to recover it. 00:30:38.637 [2024-04-17 06:56:42.954574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.954831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.954882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.637 qpair failed and we were unable to recover it. 00:30:38.637 [2024-04-17 06:56:42.955083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.955252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.955280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.637 qpair failed and we were unable to recover it. 00:30:38.637 [2024-04-17 06:56:42.955457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.955630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.955658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.637 qpair failed and we were unable to recover it. 00:30:38.637 [2024-04-17 06:56:42.955824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.956036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.956061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.637 qpair failed and we were unable to recover it. 00:30:38.637 [2024-04-17 06:56:42.956184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.956366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.956394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.637 qpair failed and we were unable to recover it. 00:30:38.637 [2024-04-17 06:56:42.956565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.956773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.956797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.637 qpair failed and we were unable to recover it. 
00:30:38.637 [2024-04-17 06:56:42.956979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.957147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.957181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.637 qpair failed and we were unable to recover it. 00:30:38.637 [2024-04-17 06:56:42.957359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.957557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.957585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.637 qpair failed and we were unable to recover it. 00:30:38.637 [2024-04-17 06:56:42.957756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.957890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.957919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.637 qpair failed and we were unable to recover it. 00:30:38.637 [2024-04-17 06:56:42.958119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.958323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.958352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.637 qpair failed and we were unable to recover it. 00:30:38.637 [2024-04-17 06:56:42.958531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.958706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.958734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.637 qpair failed and we were unable to recover it. 00:30:38.637 [2024-04-17 06:56:42.958897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.959076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.959102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.637 qpair failed and we were unable to recover it. 00:30:38.637 [2024-04-17 06:56:42.959270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.959421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.959447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.637 qpair failed and we were unable to recover it. 
00:30:38.637 [2024-04-17 06:56:42.959565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.959721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.959745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.637 qpair failed and we were unable to recover it. 00:30:38.637 [2024-04-17 06:56:42.959898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.960048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.960089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.637 qpair failed and we were unable to recover it. 00:30:38.637 [2024-04-17 06:56:42.960290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.960469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.960494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.637 qpair failed and we were unable to recover it. 00:30:38.637 [2024-04-17 06:56:42.960615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.960792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.960818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.637 qpair failed and we were unable to recover it. 00:30:38.637 [2024-04-17 06:56:42.960945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.961093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.961119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.637 qpair failed and we were unable to recover it. 00:30:38.637 [2024-04-17 06:56:42.961289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.961448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.961493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.637 qpair failed and we were unable to recover it. 00:30:38.637 [2024-04-17 06:56:42.961671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.961818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.961843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.637 qpair failed and we were unable to recover it. 
00:30:38.637 [2024-04-17 06:56:42.962042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.962189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.637 [2024-04-17 06:56:42.962218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.638 qpair failed and we were unable to recover it. 00:30:38.638 [2024-04-17 06:56:42.962421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.962571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.962596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.638 qpair failed and we were unable to recover it. 00:30:38.638 [2024-04-17 06:56:42.962797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.962994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.963021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.638 qpair failed and we were unable to recover it. 00:30:38.638 [2024-04-17 06:56:42.963204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.963377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.963405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.638 qpair failed and we were unable to recover it. 00:30:38.638 [2024-04-17 06:56:42.963617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.963742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.963767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.638 qpair failed and we were unable to recover it. 00:30:38.638 [2024-04-17 06:56:42.963926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.964075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.964100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.638 qpair failed and we were unable to recover it. 00:30:38.638 [2024-04-17 06:56:42.964266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.964444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.964488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.638 qpair failed and we were unable to recover it. 
00:30:38.638 [2024-04-17 06:56:42.964738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.964899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.964926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.638 qpair failed and we were unable to recover it. 00:30:38.638 [2024-04-17 06:56:42.965083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.965292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.965321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.638 qpair failed and we were unable to recover it. 00:30:38.638 [2024-04-17 06:56:42.965523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.965652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.965695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.638 qpair failed and we were unable to recover it. 00:30:38.638 [2024-04-17 06:56:42.965849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.966004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.966029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.638 qpair failed and we were unable to recover it. 00:30:38.638 [2024-04-17 06:56:42.966209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.966408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.966433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.638 qpair failed and we were unable to recover it. 00:30:38.638 [2024-04-17 06:56:42.966581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.966739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.966779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.638 qpair failed and we were unable to recover it. 00:30:38.638 [2024-04-17 06:56:42.966951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.967122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.967150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.638 qpair failed and we were unable to recover it. 
00:30:38.638 [2024-04-17 06:56:42.967349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.967477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.967502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.638 qpair failed and we were unable to recover it. 00:30:38.638 [2024-04-17 06:56:42.967655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.967835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.967863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.638 qpair failed and we were unable to recover it. 00:30:38.638 [2024-04-17 06:56:42.968072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.968276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.968302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.638 qpair failed and we were unable to recover it. 00:30:38.638 [2024-04-17 06:56:42.968431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.968629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.968668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.638 qpair failed and we were unable to recover it. 00:30:38.638 [2024-04-17 06:56:42.968873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.969076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.969107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.638 qpair failed and we were unable to recover it. 00:30:38.638 [2024-04-17 06:56:42.969293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.969434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.969468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.638 qpair failed and we were unable to recover it. 00:30:38.638 [2024-04-17 06:56:42.969673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.969883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.969911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.638 qpair failed and we were unable to recover it. 
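The nvme_tcp_qpair_connect_sock entries show the transport handing a string address ("10.0.0.2") and service ("4420") down to the socket layer before the connect() that fails. Below is a generic POSIX sketch of that resolve-then-connect pattern using getaddrinfo; it is an assumption made for illustration only and does not reproduce SPDK's actual sock layer.

#include <errno.h>
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Resolve addr/service strings and attempt one TCP connect per result.
 * Returns the connected fd, or -1 with errno set (e.g. 111 as in the log). */
static int tcp_connect(const char *addr, const char *svc)
{
    struct addrinfo hints = { .ai_family = AF_UNSPEC, .ai_socktype = SOCK_STREAM };
    struct addrinfo *res, *ai;
    int fd = -1, saved_errno = 0;

    if (getaddrinfo(addr, svc, &hints, &res) != 0)
        return -1;
    for (ai = res; ai != NULL; ai = ai->ai_next) {
        fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (fd < 0)
            continue;
        if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
            break;                      /* connected */
        saved_errno = errno;            /* e.g. ECONNREFUSED (111) */
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    if (fd < 0)
        errno = saved_errno;
    return fd;
}

int main(void)
{
    if (tcp_connect("10.0.0.2", "4420") < 0)
        fprintf(stderr, "connect failed: errno = %d (%s)\n", errno, strerror(errno));
    return 0;
}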
00:30:38.638 [2024-04-17 06:56:42.970064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.970234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.970263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.638 qpair failed and we were unable to recover it. 00:30:38.638 [2024-04-17 06:56:42.970412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.970571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.970596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.638 qpair failed and we were unable to recover it. 00:30:38.638 [2024-04-17 06:56:42.970781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.970954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.970981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.638 qpair failed and we were unable to recover it. 00:30:38.638 [2024-04-17 06:56:42.971149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.971310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.971339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.638 qpair failed and we were unable to recover it. 00:30:38.638 [2024-04-17 06:56:42.971512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.971685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.971715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.638 qpair failed and we were unable to recover it. 00:30:38.638 [2024-04-17 06:56:42.971860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.972015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.638 [2024-04-17 06:56:42.972042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.639 qpair failed and we were unable to recover it. 00:30:38.639 [2024-04-17 06:56:42.972192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.972358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.972386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.639 qpair failed and we were unable to recover it. 
00:30:38.639 [2024-04-17 06:56:42.972524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.972661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.972694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.639 qpair failed and we were unable to recover it. 00:30:38.639 [2024-04-17 06:56:42.972857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.972998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.973026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.639 qpair failed and we were unable to recover it. 00:30:38.639 [2024-04-17 06:56:42.973170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.973331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.973372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.639 qpair failed and we were unable to recover it. 00:30:38.639 [2024-04-17 06:56:42.973564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.973701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.973729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.639 qpair failed and we were unable to recover it. 00:30:38.639 [2024-04-17 06:56:42.973869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.974038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.974066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.639 qpair failed and we were unable to recover it. 00:30:38.639 [2024-04-17 06:56:42.974219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.974386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.974414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.639 qpair failed and we were unable to recover it. 00:30:38.639 [2024-04-17 06:56:42.974578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.974703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.974728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.639 qpair failed and we were unable to recover it. 
00:30:38.639 [2024-04-17 06:56:42.974905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.975103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.975131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.639 qpair failed and we were unable to recover it. 00:30:38.639 [2024-04-17 06:56:42.975279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.975421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.975449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.639 qpair failed and we were unable to recover it. 00:30:38.639 [2024-04-17 06:56:42.975617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.975785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.975813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.639 qpair failed and we were unable to recover it. 00:30:38.639 [2024-04-17 06:56:42.975987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.976191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.976221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.639 qpair failed and we were unable to recover it. 00:30:38.639 [2024-04-17 06:56:42.976377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.976609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.976637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.639 qpair failed and we were unable to recover it. 00:30:38.639 [2024-04-17 06:56:42.976801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.976973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.977002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.639 qpair failed and we were unable to recover it. 00:30:38.639 [2024-04-17 06:56:42.977182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.977336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.977363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.639 qpair failed and we were unable to recover it. 
00:30:38.639 [2024-04-17 06:56:42.977566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.977694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.977718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.639 qpair failed and we were unable to recover it. 00:30:38.639 [2024-04-17 06:56:42.977880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.978060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.978088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.639 qpair failed and we were unable to recover it. 00:30:38.639 [2024-04-17 06:56:42.978246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.978415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.978444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.639 qpair failed and we were unable to recover it. 00:30:38.639 [2024-04-17 06:56:42.978595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.978764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.978793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.639 qpair failed and we were unable to recover it. 00:30:38.639 [2024-04-17 06:56:42.978939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.979095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.979120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.639 qpair failed and we were unable to recover it. 00:30:38.639 [2024-04-17 06:56:42.979284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.979428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.979457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.639 qpair failed and we were unable to recover it. 00:30:38.639 [2024-04-17 06:56:42.979626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.979800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.979827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.639 qpair failed and we were unable to recover it. 
00:30:38.639 [2024-04-17 06:56:42.979980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.980148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.980184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.639 qpair failed and we were unable to recover it. 00:30:38.639 [2024-04-17 06:56:42.980328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.980531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.980559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.639 qpair failed and we were unable to recover it. 00:30:38.639 [2024-04-17 06:56:42.980837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.980979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.981006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.639 qpair failed and we were unable to recover it. 00:30:38.639 [2024-04-17 06:56:42.981172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.981325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.639 [2024-04-17 06:56:42.981352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.639 qpair failed and we were unable to recover it. 00:30:38.640 [2024-04-17 06:56:42.981511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.981694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.981720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.640 qpair failed and we were unable to recover it. 00:30:38.640 [2024-04-17 06:56:42.981902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.982097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.982124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.640 qpair failed and we were unable to recover it. 00:30:38.640 [2024-04-17 06:56:42.982335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.982453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.982478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.640 qpair failed and we were unable to recover it. 
00:30:38.640 [2024-04-17 06:56:42.982625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.982787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.982828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.640 qpair failed and we were unable to recover it. 00:30:38.640 [2024-04-17 06:56:42.982978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.983142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.983191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.640 qpair failed and we were unable to recover it. 00:30:38.640 [2024-04-17 06:56:42.983376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.983499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.983524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.640 qpair failed and we were unable to recover it. 00:30:38.640 [2024-04-17 06:56:42.983688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.983839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.983864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.640 qpair failed and we were unable to recover it. 00:30:38.640 [2024-04-17 06:56:42.984081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.984261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.984287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.640 qpair failed and we were unable to recover it. 00:30:38.640 [2024-04-17 06:56:42.984484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.984670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.984710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.640 qpair failed and we were unable to recover it. 00:30:38.640 [2024-04-17 06:56:42.984878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.985059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.985100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.640 qpair failed and we were unable to recover it. 
00:30:38.640 [2024-04-17 06:56:42.985286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.985445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.985492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.640 qpair failed and we were unable to recover it. 00:30:38.640 [2024-04-17 06:56:42.985691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.985844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.985874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.640 qpair failed and we were unable to recover it. 00:30:38.640 [2024-04-17 06:56:42.986052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.986199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.986225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.640 qpair failed and we were unable to recover it. 00:30:38.640 [2024-04-17 06:56:42.986361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.986512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.986536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.640 qpair failed and we were unable to recover it. 00:30:38.640 [2024-04-17 06:56:42.986680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.986824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.986851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.640 qpair failed and we were unable to recover it. 00:30:38.640 [2024-04-17 06:56:42.987019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.987203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.987228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.640 qpair failed and we were unable to recover it. 00:30:38.640 [2024-04-17 06:56:42.987379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.987556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.987584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.640 qpair failed and we were unable to recover it. 
00:30:38.640 [2024-04-17 06:56:42.987751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.987947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.987974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.640 qpair failed and we were unable to recover it. 00:30:38.640 [2024-04-17 06:56:42.988135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.988311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.988339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.640 qpair failed and we were unable to recover it. 00:30:38.640 [2024-04-17 06:56:42.988492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.988627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.988655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.640 qpair failed and we were unable to recover it. 00:30:38.640 [2024-04-17 06:56:42.988802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.988964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.988989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.640 qpair failed and we were unable to recover it. 00:30:38.640 [2024-04-17 06:56:42.989146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.989316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.989342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.640 qpair failed and we were unable to recover it. 00:30:38.640 [2024-04-17 06:56:42.989497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.989657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.989683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.640 qpair failed and we were unable to recover it. 00:30:38.640 [2024-04-17 06:56:42.989841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.990063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.990088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.640 qpair failed and we were unable to recover it. 
00:30:38.640 [2024-04-17 06:56:42.990210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.990343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.990369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.640 qpair failed and we were unable to recover it. 00:30:38.640 [2024-04-17 06:56:42.990493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.990673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.990698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.640 qpair failed and we were unable to recover it. 00:30:38.640 [2024-04-17 06:56:42.990904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.991094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.991126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.640 qpair failed and we were unable to recover it. 00:30:38.640 [2024-04-17 06:56:42.991337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.991510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.991538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.640 qpair failed and we were unable to recover it. 00:30:38.640 [2024-04-17 06:56:42.991701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.991886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.640 [2024-04-17 06:56:42.991911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.641 qpair failed and we were unable to recover it. 00:30:38.641 [2024-04-17 06:56:42.992066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.992243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.992272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.641 qpair failed and we were unable to recover it. 00:30:38.641 [2024-04-17 06:56:42.992451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.992635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.992676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.641 qpair failed and we were unable to recover it. 
00:30:38.641 [2024-04-17 06:56:42.992812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.992984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.993011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.641 qpair failed and we were unable to recover it. 00:30:38.641 [2024-04-17 06:56:42.993193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.993364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.993392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.641 qpair failed and we were unable to recover it. 00:30:38.641 [2024-04-17 06:56:42.993581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.993784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.993811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.641 qpair failed and we were unable to recover it. 00:30:38.641 [2024-04-17 06:56:42.994010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.994173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.994207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.641 qpair failed and we were unable to recover it. 00:30:38.641 [2024-04-17 06:56:42.994369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.994516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.994545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.641 qpair failed and we were unable to recover it. 00:30:38.641 [2024-04-17 06:56:42.994748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.994946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.994974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.641 qpair failed and we were unable to recover it. 00:30:38.641 [2024-04-17 06:56:42.995157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.995318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.995360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.641 qpair failed and we were unable to recover it. 
00:30:38.641 [2024-04-17 06:56:42.995626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.995950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.995991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.641 qpair failed and we were unable to recover it. 00:30:38.641 [2024-04-17 06:56:42.996158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.996335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.996361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.641 qpair failed and we were unable to recover it. 00:30:38.641 [2024-04-17 06:56:42.996498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.996665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.996692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.641 qpair failed and we were unable to recover it. 00:30:38.641 [2024-04-17 06:56:42.996843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.997014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.997038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.641 qpair failed and we were unable to recover it. 00:30:38.641 [2024-04-17 06:56:42.997205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.997395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.997423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.641 qpair failed and we were unable to recover it. 00:30:38.641 [2024-04-17 06:56:42.997562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.997694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.997722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.641 qpair failed and we were unable to recover it. 00:30:38.641 [2024-04-17 06:56:42.997895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.998067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.998095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.641 qpair failed and we were unable to recover it. 
00:30:38.641 [2024-04-17 06:56:42.998275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.998446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.998474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.641 qpair failed and we were unable to recover it. 00:30:38.641 [2024-04-17 06:56:42.998702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.998855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.998880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.641 qpair failed and we were unable to recover it. 00:30:38.641 [2024-04-17 06:56:42.999018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.999198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.999226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.641 qpair failed and we were unable to recover it. 00:30:38.641 [2024-04-17 06:56:42.999393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.999596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:42.999643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.641 qpair failed and we were unable to recover it. 00:30:38.641 [2024-04-17 06:56:42.999837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:43.000001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:43.000028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.641 qpair failed and we were unable to recover it. 00:30:38.641 [2024-04-17 06:56:43.000289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:43.000463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:43.000509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.641 qpair failed and we were unable to recover it. 00:30:38.641 [2024-04-17 06:56:43.000662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:43.000860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:43.000889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.641 qpair failed and we were unable to recover it. 
00:30:38.641 [2024-04-17 06:56:43.001100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:43.001251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:43.001279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.641 qpair failed and we were unable to recover it. 00:30:38.641 [2024-04-17 06:56:43.001502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:43.001677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:43.001705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.641 qpair failed and we were unable to recover it. 00:30:38.641 [2024-04-17 06:56:43.001882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:43.002053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:43.002081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.641 qpair failed and we were unable to recover it. 00:30:38.641 [2024-04-17 06:56:43.002263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:43.002441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:43.002470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.641 qpair failed and we were unable to recover it. 00:30:38.641 [2024-04-17 06:56:43.002607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:43.002806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.641 [2024-04-17 06:56:43.002833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.641 qpair failed and we were unable to recover it. 00:30:38.641 [2024-04-17 06:56:43.002989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.003163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.003195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.642 qpair failed and we were unable to recover it. 00:30:38.642 [2024-04-17 06:56:43.003357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.003572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.003597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.642 qpair failed and we were unable to recover it. 
00:30:38.642 [2024-04-17 06:56:43.003781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.004014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.004058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.642 qpair failed and we were unable to recover it. 00:30:38.642 [2024-04-17 06:56:43.004216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.004387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.004415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.642 qpair failed and we were unable to recover it. 00:30:38.642 [2024-04-17 06:56:43.004566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.004717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.004742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.642 qpair failed and we were unable to recover it. 00:30:38.642 [2024-04-17 06:56:43.004894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.005047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.005087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.642 qpair failed and we were unable to recover it. 00:30:38.642 [2024-04-17 06:56:43.005271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.005413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.005440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.642 qpair failed and we were unable to recover it. 00:30:38.642 [2024-04-17 06:56:43.005644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.005773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.005798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.642 qpair failed and we were unable to recover it. 00:30:38.642 [2024-04-17 06:56:43.005965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.006119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.006159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.642 qpair failed and we were unable to recover it. 
00:30:38.642 [2024-04-17 06:56:43.006321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.006443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.006486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.642 qpair failed and we were unable to recover it. 00:30:38.642 [2024-04-17 06:56:43.006660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.006842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.006867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.642 qpair failed and we were unable to recover it. 00:30:38.642 [2024-04-17 06:56:43.006992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.007119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.007144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.642 qpair failed and we were unable to recover it. 00:30:38.642 [2024-04-17 06:56:43.007324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.007504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.007550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.642 qpair failed and we were unable to recover it. 00:30:38.642 [2024-04-17 06:56:43.007737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.007899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.007924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.642 qpair failed and we were unable to recover it. 00:30:38.642 [2024-04-17 06:56:43.008102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.008288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.008317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.642 qpair failed and we were unable to recover it. 00:30:38.642 [2024-04-17 06:56:43.008501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.008683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.008726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.642 qpair failed and we were unable to recover it. 
00:30:38.642 [2024-04-17 06:56:43.008889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.009024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.009049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.642 qpair failed and we were unable to recover it. 00:30:38.642 [2024-04-17 06:56:43.009187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.009334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.009362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.642 qpair failed and we were unable to recover it. 00:30:38.642 [2024-04-17 06:56:43.009496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.009697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.009722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.642 qpair failed and we were unable to recover it. 00:30:38.642 [2024-04-17 06:56:43.009849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.010061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.010086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.642 qpair failed and we were unable to recover it. 00:30:38.642 [2024-04-17 06:56:43.010240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.010390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.010419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.642 qpair failed and we were unable to recover it. 00:30:38.642 [2024-04-17 06:56:43.010622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.010792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.010820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.642 qpair failed and we were unable to recover it. 00:30:38.642 [2024-04-17 06:56:43.011024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.011196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.011222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.642 qpair failed and we were unable to recover it. 
00:30:38.642 [2024-04-17 06:56:43.011346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.642 [2024-04-17 06:56:43.011476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.011501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.643 qpair failed and we were unable to recover it. 00:30:38.643 [2024-04-17 06:56:43.011655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.011774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.011799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.643 qpair failed and we were unable to recover it. 00:30:38.643 [2024-04-17 06:56:43.011944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.012099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.012124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.643 qpair failed and we were unable to recover it. 00:30:38.643 [2024-04-17 06:56:43.012368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.012520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.012545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.643 qpair failed and we were unable to recover it. 00:30:38.643 [2024-04-17 06:56:43.012664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.012786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.012812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.643 qpair failed and we were unable to recover it. 00:30:38.643 [2024-04-17 06:56:43.012937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.013104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.013129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.643 qpair failed and we were unable to recover it. 00:30:38.643 [2024-04-17 06:56:43.013337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.013517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.013542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.643 qpair failed and we were unable to recover it. 
00:30:38.643 [2024-04-17 06:56:43.013672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.013825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.013854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.643 qpair failed and we were unable to recover it. 00:30:38.643 [2024-04-17 06:56:43.014006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.014216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.014242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.643 qpair failed and we were unable to recover it. 00:30:38.643 [2024-04-17 06:56:43.014424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.014627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.014655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.643 qpair failed and we were unable to recover it. 00:30:38.643 [2024-04-17 06:56:43.014886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.015076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.015103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.643 qpair failed and we were unable to recover it. 00:30:38.643 [2024-04-17 06:56:43.015278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.015414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.015442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.643 qpair failed and we were unable to recover it. 00:30:38.643 [2024-04-17 06:56:43.015615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.015784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.015811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.643 qpair failed and we were unable to recover it. 00:30:38.643 [2024-04-17 06:56:43.015965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.016122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.016180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.643 qpair failed and we were unable to recover it. 
00:30:38.643 [2024-04-17 06:56:43.016344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.016514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.016543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.643 qpair failed and we were unable to recover it. 00:30:38.643 [2024-04-17 06:56:43.016707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.016886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.016930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.643 qpair failed and we were unable to recover it. 00:30:38.643 [2024-04-17 06:56:43.017065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.017244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.017272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.643 qpair failed and we were unable to recover it. 00:30:38.643 [2024-04-17 06:56:43.017478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.017674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.017702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.643 qpair failed and we were unable to recover it. 00:30:38.643 [2024-04-17 06:56:43.017896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.018033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.018058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.643 qpair failed and we were unable to recover it. 00:30:38.643 [2024-04-17 06:56:43.018185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.018343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.018368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.643 qpair failed and we were unable to recover it. 00:30:38.643 [2024-04-17 06:56:43.018501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.018698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.018725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.643 qpair failed and we were unable to recover it. 
00:30:38.643 [2024-04-17 06:56:43.018904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.019100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.643 [2024-04-17 06:56:43.019127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.643 qpair failed and we were unable to recover it. 00:30:38.643 [2024-04-17 06:56:43.019347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.644 [2024-04-17 06:56:43.019541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.644 [2024-04-17 06:56:43.019568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.644 qpair failed and we were unable to recover it. 00:30:38.644 [2024-04-17 06:56:43.019776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.644 [2024-04-17 06:56:43.019934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.644 [2024-04-17 06:56:43.019961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.644 qpair failed and we were unable to recover it. 00:30:38.644 [2024-04-17 06:56:43.020107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.644 [2024-04-17 06:56:43.020274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.644 [2024-04-17 06:56:43.020303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.644 qpair failed and we were unable to recover it. 00:30:38.644 [2024-04-17 06:56:43.020460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.644 [2024-04-17 06:56:43.020615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.644 [2024-04-17 06:56:43.020655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.644 qpair failed and we were unable to recover it. 00:30:38.644 [2024-04-17 06:56:43.020795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.644 [2024-04-17 06:56:43.020972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.644 [2024-04-17 06:56:43.020999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.644 qpair failed and we were unable to recover it. 00:30:38.644 [2024-04-17 06:56:43.021174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.644 [2024-04-17 06:56:43.021306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.644 [2024-04-17 06:56:43.021331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.644 qpair failed and we were unable to recover it. 
00:30:38.644 [2024-04-17 06:56:43.021547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.644 [2024-04-17 06:56:43.021738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.644 [2024-04-17 06:56:43.021783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.644 qpair failed and we were unable to recover it. 00:30:38.644 [2024-04-17 06:56:43.021927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.644 [2024-04-17 06:56:43.022134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.644 [2024-04-17 06:56:43.022173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.644 qpair failed and we were unable to recover it. 00:30:38.644 [2024-04-17 06:56:43.022340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.644 [2024-04-17 06:56:43.022466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.644 [2024-04-17 06:56:43.022491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.644 qpair failed and we were unable to recover it. 00:30:38.644 [2024-04-17 06:56:43.022662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.644 [2024-04-17 06:56:43.022790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.644 [2024-04-17 06:56:43.022818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.644 qpair failed and we were unable to recover it. 00:30:38.644 [2024-04-17 06:56:43.023014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.644 [2024-04-17 06:56:43.023211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.644 [2024-04-17 06:56:43.023240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.644 qpair failed and we were unable to recover it. 00:30:38.644 [2024-04-17 06:56:43.023410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.644 [2024-04-17 06:56:43.023536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.644 [2024-04-17 06:56:43.023561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.644 qpair failed and we were unable to recover it. 00:30:38.644 [2024-04-17 06:56:43.023700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.644 [2024-04-17 06:56:43.023863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.644 [2024-04-17 06:56:43.023891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.644 qpair failed and we were unable to recover it. 
00:30:38.644 [2024-04-17 06:56:43.024103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.644 [2024-04-17 06:56:43.024225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.644 [2024-04-17 06:56:43.024251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.644 qpair failed and we were unable to recover it. 00:30:38.644 [2024-04-17 06:56:43.024398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.644 [2024-04-17 06:56:43.024582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.644 [2024-04-17 06:56:43.024611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.644 qpair failed and we were unable to recover it. 00:30:38.644 [2024-04-17 06:56:43.024829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.644 [2024-04-17 06:56:43.025021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.644 [2024-04-17 06:56:43.025048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.644 qpair failed and we were unable to recover it. 00:30:38.644 [2024-04-17 06:56:43.025237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.025424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.025450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.645 qpair failed and we were unable to recover it. 00:30:38.645 [2024-04-17 06:56:43.025587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.025789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.025817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.645 qpair failed and we were unable to recover it. 00:30:38.645 [2024-04-17 06:56:43.026006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.026159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.026195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.645 qpair failed and we were unable to recover it. 00:30:38.645 [2024-04-17 06:56:43.026380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.026563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.026591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.645 qpair failed and we were unable to recover it. 
00:30:38.645 [2024-04-17 06:56:43.026759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.026960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.026985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.645 qpair failed and we were unable to recover it. 00:30:38.645 [2024-04-17 06:56:43.027200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.027342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.027370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.645 qpair failed and we were unable to recover it. 00:30:38.645 [2024-04-17 06:56:43.027545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.027740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.027768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.645 qpair failed and we were unable to recover it. 00:30:38.645 [2024-04-17 06:56:43.027972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.028151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.028203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.645 qpair failed and we were unable to recover it. 00:30:38.645 [2024-04-17 06:56:43.028364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.028589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.028617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.645 qpair failed and we were unable to recover it. 00:30:38.645 [2024-04-17 06:56:43.028763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.028912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.028939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.645 qpair failed and we were unable to recover it. 00:30:38.645 [2024-04-17 06:56:43.029122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.029253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.029279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.645 qpair failed and we were unable to recover it. 
00:30:38.645 [2024-04-17 06:56:43.029396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.029559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.029585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.645 qpair failed and we were unable to recover it. 00:30:38.645 [2024-04-17 06:56:43.029861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.030012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.030040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.645 qpair failed and we were unable to recover it. 00:30:38.645 [2024-04-17 06:56:43.030248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.030441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.030468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.645 qpair failed and we were unable to recover it. 00:30:38.645 [2024-04-17 06:56:43.030659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.030828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.030853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.645 qpair failed and we were unable to recover it. 00:30:38.645 [2024-04-17 06:56:43.031057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.031231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.031257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.645 qpair failed and we were unable to recover it. 00:30:38.645 [2024-04-17 06:56:43.031404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.031590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.031632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.645 qpair failed and we were unable to recover it. 00:30:38.645 [2024-04-17 06:56:43.031821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.031998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.032023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.645 qpair failed and we were unable to recover it. 
00:30:38.645 [2024-04-17 06:56:43.032157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.032390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.645 [2024-04-17 06:56:43.032415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.645 qpair failed and we were unable to recover it. 00:30:38.645 [2024-04-17 06:56:43.032615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.032763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.032806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.646 qpair failed and we were unable to recover it. 00:30:38.646 [2024-04-17 06:56:43.033025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.033192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.033238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.646 qpair failed and we were unable to recover it. 00:30:38.646 [2024-04-17 06:56:43.033420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.033571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.033596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.646 qpair failed and we were unable to recover it. 00:30:38.646 [2024-04-17 06:56:43.033733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.033922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.033947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.646 qpair failed and we were unable to recover it. 00:30:38.646 [2024-04-17 06:56:43.034105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.034297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.034323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.646 qpair failed and we were unable to recover it. 00:30:38.646 [2024-04-17 06:56:43.034501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.034720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.034770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.646 qpair failed and we were unable to recover it. 
00:30:38.646 [2024-04-17 06:56:43.034974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.035097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.035122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.646 qpair failed and we were unable to recover it. 00:30:38.646 [2024-04-17 06:56:43.035250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.035398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.035424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.646 qpair failed and we were unable to recover it. 00:30:38.646 [2024-04-17 06:56:43.035615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.035806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.035848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.646 qpair failed and we were unable to recover it. 00:30:38.646 [2024-04-17 06:56:43.035998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.036131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.036159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.646 qpair failed and we were unable to recover it. 00:30:38.646 [2024-04-17 06:56:43.036381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.036553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.036578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.646 qpair failed and we were unable to recover it. 00:30:38.646 [2024-04-17 06:56:43.036758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.036901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.036929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.646 qpair failed and we were unable to recover it. 00:30:38.646 [2024-04-17 06:56:43.037120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.037307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.037333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.646 qpair failed and we were unable to recover it. 
00:30:38.646 [2024-04-17 06:56:43.037513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.037725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.037752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.646 qpair failed and we were unable to recover it. 00:30:38.646 [2024-04-17 06:56:43.037923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.038105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.038133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.646 qpair failed and we were unable to recover it. 00:30:38.646 [2024-04-17 06:56:43.038329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.038469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.038494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.646 qpair failed and we were unable to recover it. 00:30:38.646 [2024-04-17 06:56:43.038675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.038843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.038870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.646 qpair failed and we were unable to recover it. 00:30:38.646 [2024-04-17 06:56:43.039043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.039207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.039236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.646 qpair failed and we were unable to recover it. 00:30:38.646 [2024-04-17 06:56:43.039404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.039564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.039592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.646 qpair failed and we were unable to recover it. 00:30:38.646 [2024-04-17 06:56:43.039734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.646 [2024-04-17 06:56:43.039897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.039925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.647 qpair failed and we were unable to recover it. 
00:30:38.647 [2024-04-17 06:56:43.040078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.040275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.040304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.647 qpair failed and we were unable to recover it. 00:30:38.647 [2024-04-17 06:56:43.040450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.040621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.040648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.647 qpair failed and we were unable to recover it. 00:30:38.647 [2024-04-17 06:56:43.040815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.041010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.041038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.647 qpair failed and we were unable to recover it. 00:30:38.647 [2024-04-17 06:56:43.041172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.041342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.041370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.647 qpair failed and we were unable to recover it. 00:30:38.647 [2024-04-17 06:56:43.041545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.041664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.041689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.647 qpair failed and we were unable to recover it. 00:30:38.647 [2024-04-17 06:56:43.041876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.042046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.042074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.647 qpair failed and we were unable to recover it. 00:30:38.647 [2024-04-17 06:56:43.042212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.042382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.042410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.647 qpair failed and we were unable to recover it. 
00:30:38.647 [2024-04-17 06:56:43.042587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.042756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.042784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.647 qpair failed and we were unable to recover it. 00:30:38.647 [2024-04-17 06:56:43.042936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.043089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.043129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.647 qpair failed and we were unable to recover it. 00:30:38.647 [2024-04-17 06:56:43.043297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.043472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.043499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.647 qpair failed and we were unable to recover it. 00:30:38.647 [2024-04-17 06:56:43.043670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.043867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.043894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.647 qpair failed and we were unable to recover it. 00:30:38.647 [2024-04-17 06:56:43.044042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.044241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.044269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.647 qpair failed and we were unable to recover it. 00:30:38.647 [2024-04-17 06:56:43.044445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.044594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.044636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.647 qpair failed and we were unable to recover it. 00:30:38.647 [2024-04-17 06:56:43.044781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.044977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.045005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.647 qpair failed and we were unable to recover it. 
00:30:38.647 [2024-04-17 06:56:43.045220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.045415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.045443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.647 qpair failed and we were unable to recover it. 00:30:38.647 [2024-04-17 06:56:43.045579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.045745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.045773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.647 qpair failed and we were unable to recover it. 00:30:38.647 [2024-04-17 06:56:43.045918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.046089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.046130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.647 qpair failed and we were unable to recover it. 00:30:38.647 [2024-04-17 06:56:43.046298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.046474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.647 [2024-04-17 06:56:43.046502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.647 qpair failed and we were unable to recover it. 00:30:38.647 [2024-04-17 06:56:43.046682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.046860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.046903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.648 qpair failed and we were unable to recover it. 00:30:38.648 [2024-04-17 06:56:43.047098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.047256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.047282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.648 qpair failed and we were unable to recover it. 00:30:38.648 [2024-04-17 06:56:43.047434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.047564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.047604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.648 qpair failed and we were unable to recover it. 
00:30:38.648 [2024-04-17 06:56:43.047803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.047962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.048006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.648 qpair failed and we were unable to recover it. 00:30:38.648 [2024-04-17 06:56:43.048186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.048361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.048390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.648 qpair failed and we were unable to recover it. 00:30:38.648 [2024-04-17 06:56:43.048569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.048724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.048765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.648 qpair failed and we were unable to recover it. 00:30:38.648 [2024-04-17 06:56:43.048945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.049095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.049120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.648 qpair failed and we were unable to recover it. 00:30:38.648 [2024-04-17 06:56:43.049259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.049449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.049479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.648 qpair failed and we were unable to recover it. 00:30:38.648 [2024-04-17 06:56:43.049606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.049817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.049845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.648 qpair failed and we were unable to recover it. 00:30:38.648 [2024-04-17 06:56:43.049984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.050154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.050189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.648 qpair failed and we were unable to recover it. 
00:30:38.648 [2024-04-17 06:56:43.050330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.050467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.050492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.648 qpair failed and we were unable to recover it. 00:30:38.648 [2024-04-17 06:56:43.050639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.050760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.050786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.648 qpair failed and we were unable to recover it. 00:30:38.648 [2024-04-17 06:56:43.050944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.051152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.051187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.648 qpair failed and we were unable to recover it. 00:30:38.648 [2024-04-17 06:56:43.051386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.051589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.051614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.648 qpair failed and we were unable to recover it. 00:30:38.648 [2024-04-17 06:56:43.051760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.051920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.051967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.648 qpair failed and we were unable to recover it. 00:30:38.648 [2024-04-17 06:56:43.052141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.052337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.052363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.648 qpair failed and we were unable to recover it. 00:30:38.648 [2024-04-17 06:56:43.052514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.052685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.052713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.648 qpair failed and we were unable to recover it. 
00:30:38.648 [2024-04-17 06:56:43.052886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.053055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.053082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.648 qpair failed and we were unable to recover it. 00:30:38.648 [2024-04-17 06:56:43.053249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.053428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.648 [2024-04-17 06:56:43.053453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.648 qpair failed and we were unable to recover it. 00:30:38.649 [2024-04-17 06:56:43.053611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.053760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.053785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.649 qpair failed and we were unable to recover it. 00:30:38.649 [2024-04-17 06:56:43.053914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.054070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.054097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.649 qpair failed and we were unable to recover it. 00:30:38.649 [2024-04-17 06:56:43.054284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.054443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.054468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.649 qpair failed and we were unable to recover it. 00:30:38.649 [2024-04-17 06:56:43.054650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.054855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.054883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.649 qpair failed and we were unable to recover it. 00:30:38.649 [2024-04-17 06:56:43.055018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.055152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.055196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.649 qpair failed and we were unable to recover it. 
00:30:38.649 [2024-04-17 06:56:43.055351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.055497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.055526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.649 qpair failed and we were unable to recover it. 00:30:38.649 [2024-04-17 06:56:43.055688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.055859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.055888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.649 qpair failed and we were unable to recover it. 00:30:38.649 [2024-04-17 06:56:43.056066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.056234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.056261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.649 qpair failed and we were unable to recover it. 00:30:38.649 [2024-04-17 06:56:43.056385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.056524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.056549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.649 qpair failed and we were unable to recover it. 00:30:38.649 [2024-04-17 06:56:43.056783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.056983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.057011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.649 qpair failed and we were unable to recover it. 00:30:38.649 [2024-04-17 06:56:43.057180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.057361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.057388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.649 qpair failed and we were unable to recover it. 00:30:38.649 [2024-04-17 06:56:43.057546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.057680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.057705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.649 qpair failed and we were unable to recover it. 
00:30:38.649 [2024-04-17 06:56:43.057873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.058029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.058054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.649 qpair failed and we were unable to recover it. 00:30:38.649 [2024-04-17 06:56:43.058213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.058464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.058492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.649 qpair failed and we were unable to recover it. 00:30:38.649 [2024-04-17 06:56:43.058691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.058852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.058880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.649 qpair failed and we were unable to recover it. 00:30:38.649 [2024-04-17 06:56:43.059043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.059226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.059252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.649 qpair failed and we were unable to recover it. 00:30:38.649 [2024-04-17 06:56:43.059409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.059567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.059609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.649 qpair failed and we were unable to recover it. 00:30:38.649 [2024-04-17 06:56:43.059743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.059873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.059900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.649 qpair failed and we were unable to recover it. 00:30:38.649 [2024-04-17 06:56:43.060146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.060330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.060356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.649 qpair failed and we were unable to recover it. 
00:30:38.649 [2024-04-17 06:56:43.060486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.649 [2024-04-17 06:56:43.060611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.060636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.650 qpair failed and we were unable to recover it. 00:30:38.650 [2024-04-17 06:56:43.060763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.060912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.060937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.650 qpair failed and we were unable to recover it. 00:30:38.650 [2024-04-17 06:56:43.061085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.061247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.061290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.650 qpair failed and we were unable to recover it. 00:30:38.650 [2024-04-17 06:56:43.061470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.061626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.061651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.650 qpair failed and we were unable to recover it. 00:30:38.650 [2024-04-17 06:56:43.061781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.061913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.061940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.650 qpair failed and we were unable to recover it. 00:30:38.650 [2024-04-17 06:56:43.062160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.062298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.062323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.650 qpair failed and we were unable to recover it. 00:30:38.650 [2024-04-17 06:56:43.062452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.062607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.062632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.650 qpair failed and we were unable to recover it. 
00:30:38.650 [2024-04-17 06:56:43.062835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.062970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.062999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.650 qpair failed and we were unable to recover it. 00:30:38.650 [2024-04-17 06:56:43.063158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.063333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.063359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.650 qpair failed and we were unable to recover it. 00:30:38.650 [2024-04-17 06:56:43.063565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.063707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.063735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.650 qpair failed and we were unable to recover it. 00:30:38.650 [2024-04-17 06:56:43.063924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.064054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.064081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.650 qpair failed and we were unable to recover it. 00:30:38.650 [2024-04-17 06:56:43.064262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.064399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.064426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.650 qpair failed and we were unable to recover it. 00:30:38.650 [2024-04-17 06:56:43.064598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.064744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.064785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.650 qpair failed and we were unable to recover it. 00:30:38.650 [2024-04-17 06:56:43.064920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.065095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.065120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.650 qpair failed and we were unable to recover it. 
00:30:38.650 [2024-04-17 06:56:43.065271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.065461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.065486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.650 qpair failed and we were unable to recover it. 00:30:38.650 [2024-04-17 06:56:43.065669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.065847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.065872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.650 qpair failed and we were unable to recover it. 00:30:38.650 [2024-04-17 06:56:43.066018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.066164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.066198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.650 qpair failed and we were unable to recover it. 00:30:38.650 [2024-04-17 06:56:43.066329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.066509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.650 [2024-04-17 06:56:43.066550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.651 qpair failed and we were unable to recover it. 00:30:38.651 [2024-04-17 06:56:43.066690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.651 [2024-04-17 06:56:43.066861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.651 [2024-04-17 06:56:43.066890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.651 qpair failed and we were unable to recover it. 00:30:38.651 [2024-04-17 06:56:43.067067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.651 [2024-04-17 06:56:43.067244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.651 [2024-04-17 06:56:43.067273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.651 qpair failed and we were unable to recover it. 00:30:38.651 [2024-04-17 06:56:43.067466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.651 [2024-04-17 06:56:43.067624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.651 [2024-04-17 06:56:43.067649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.651 qpair failed and we were unable to recover it. 
00:30:38.651 [2024-04-17 06:56:43.067770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.651 [2024-04-17 06:56:43.067895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.651 [2024-04-17 06:56:43.067920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.651 qpair failed and we were unable to recover it. 00:30:38.651 [2024-04-17 06:56:43.068050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.651 [2024-04-17 06:56:43.068223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.651 [2024-04-17 06:56:43.068252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.651 qpair failed and we were unable to recover it. 00:30:38.651 [2024-04-17 06:56:43.068391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.651 [2024-04-17 06:56:43.068558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.651 [2024-04-17 06:56:43.068586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.651 qpair failed and we were unable to recover it. 00:30:38.651 [2024-04-17 06:56:43.068748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.651 [2024-04-17 06:56:43.068896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.651 [2024-04-17 06:56:43.068937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.651 qpair failed and we were unable to recover it. 00:30:38.651 [2024-04-17 06:56:43.069187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.651 [2024-04-17 06:56:43.069432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.651 [2024-04-17 06:56:43.069460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.651 qpair failed and we were unable to recover it. 00:30:38.651 [2024-04-17 06:56:43.069603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.651 [2024-04-17 06:56:43.069773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.651 [2024-04-17 06:56:43.069801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.651 qpair failed and we were unable to recover it. 00:30:38.651 [2024-04-17 06:56:43.069965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.651 [2024-04-17 06:56:43.070127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.651 [2024-04-17 06:56:43.070158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.651 qpair failed and we were unable to recover it. 
00:30:38.651 [2024-04-17 06:56:43.070336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.651 [2024-04-17 06:56:43.070457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.651 [2024-04-17 06:56:43.070507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.651 qpair failed and we were unable to recover it. 00:30:38.651 [2024-04-17 06:56:43.070666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.651 [2024-04-17 06:56:43.070840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.651 [2024-04-17 06:56:43.070867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.651 qpair failed and we were unable to recover it. 00:30:38.651 [2024-04-17 06:56:43.071016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.651 [2024-04-17 06:56:43.071202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.651 [2024-04-17 06:56:43.071231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.651 qpair failed and we were unable to recover it. 00:30:38.651 [2024-04-17 06:56:43.071393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.651 [2024-04-17 06:56:43.071526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.651 [2024-04-17 06:56:43.071554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.651 qpair failed and we were unable to recover it. 00:30:38.651 [2024-04-17 06:56:43.071747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.651 [2024-04-17 06:56:43.071951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.651 [2024-04-17 06:56:43.071979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.651 qpair failed and we were unable to recover it. 00:30:38.651 [2024-04-17 06:56:43.072184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.651 [2024-04-17 06:56:43.072320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.651 [2024-04-17 06:56:43.072345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.651 qpair failed and we were unable to recover it. 00:30:38.651 [2024-04-17 06:56:43.072525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.651 [2024-04-17 06:56:43.072772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.072831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.652 qpair failed and we were unable to recover it. 
00:30:38.652 [2024-04-17 06:56:43.073028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.073225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.073254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.652 qpair failed and we were unable to recover it. 00:30:38.652 [2024-04-17 06:56:43.073415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.073601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.073642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.652 qpair failed and we were unable to recover it. 00:30:38.652 [2024-04-17 06:56:43.073851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.073975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.074000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.652 qpair failed and we were unable to recover it. 00:30:38.652 [2024-04-17 06:56:43.074187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.074350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.074378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.652 qpair failed and we were unable to recover it. 00:30:38.652 [2024-04-17 06:56:43.074578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.074773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.074800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.652 qpair failed and we were unable to recover it. 00:30:38.652 [2024-04-17 06:56:43.074981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.075132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.075200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.652 qpair failed and we were unable to recover it. 00:30:38.652 [2024-04-17 06:56:43.075387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.075557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.075584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.652 qpair failed and we were unable to recover it. 
00:30:38.652 [2024-04-17 06:56:43.075726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.075897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.075925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.652 qpair failed and we were unable to recover it. 00:30:38.652 [2024-04-17 06:56:43.076135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.076301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.076327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.652 qpair failed and we were unable to recover it. 00:30:38.652 [2024-04-17 06:56:43.076453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.076610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.076651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.652 qpair failed and we were unable to recover it. 00:30:38.652 [2024-04-17 06:56:43.076833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.077010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.077035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.652 qpair failed and we were unable to recover it. 00:30:38.652 [2024-04-17 06:56:43.077259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.077394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.077419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.652 qpair failed and we were unable to recover it. 00:30:38.652 [2024-04-17 06:56:43.077626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.077842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.077867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.652 qpair failed and we were unable to recover it. 00:30:38.652 [2024-04-17 06:56:43.078054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.078213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.078239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.652 qpair failed and we were unable to recover it. 
00:30:38.652 [2024-04-17 06:56:43.078447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.078617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.078645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.652 qpair failed and we were unable to recover it. 00:30:38.652 [2024-04-17 06:56:43.078833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.078995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.079020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.652 qpair failed and we were unable to recover it. 00:30:38.652 [2024-04-17 06:56:43.079195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.079381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.079406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.652 qpair failed and we were unable to recover it. 00:30:38.652 [2024-04-17 06:56:43.079561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.079759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.079786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.652 qpair failed and we were unable to recover it. 00:30:38.652 [2024-04-17 06:56:43.079979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.080132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.080157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.652 qpair failed and we were unable to recover it. 00:30:38.652 [2024-04-17 06:56:43.080396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.080529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.080554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.652 qpair failed and we were unable to recover it. 00:30:38.652 [2024-04-17 06:56:43.080688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.080873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.080900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.652 qpair failed and we were unable to recover it. 
00:30:38.652 [2024-04-17 06:56:43.081098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.081257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.652 [2024-04-17 06:56:43.081283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.652 qpair failed and we were unable to recover it. 00:30:38.652 [2024-04-17 06:56:43.081416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.081573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.081599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.653 qpair failed and we were unable to recover it. 00:30:38.653 [2024-04-17 06:56:43.081762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.081921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.081948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.653 qpair failed and we were unable to recover it. 00:30:38.653 [2024-04-17 06:56:43.082106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.082272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.082301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.653 qpair failed and we were unable to recover it. 00:30:38.653 [2024-04-17 06:56:43.082486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.082666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.082691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.653 qpair failed and we were unable to recover it. 00:30:38.653 [2024-04-17 06:56:43.082888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.083009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.083034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.653 qpair failed and we were unable to recover it. 00:30:38.653 [2024-04-17 06:56:43.083200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.083387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.083416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.653 qpair failed and we were unable to recover it. 
00:30:38.653 [2024-04-17 06:56:43.083634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.083787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.083827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.653 qpair failed and we were unable to recover it. 00:30:38.653 [2024-04-17 06:56:43.084004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.084160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.084194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.653 qpair failed and we were unable to recover it. 00:30:38.653 [2024-04-17 06:56:43.084427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.084582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.084622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.653 qpair failed and we were unable to recover it. 00:30:38.653 [2024-04-17 06:56:43.084784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.084921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.084948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.653 qpair failed and we were unable to recover it. 00:30:38.653 [2024-04-17 06:56:43.085151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.085317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.085342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.653 qpair failed and we were unable to recover it. 00:30:38.653 [2024-04-17 06:56:43.085470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.085655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.085680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.653 qpair failed and we were unable to recover it. 00:30:38.653 [2024-04-17 06:56:43.085854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.086079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.086107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.653 qpair failed and we were unable to recover it. 
00:30:38.653 [2024-04-17 06:56:43.086292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.086426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.086466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.653 qpair failed and we were unable to recover it. 00:30:38.653 [2024-04-17 06:56:43.086600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.086749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.086777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.653 qpair failed and we were unable to recover it. 00:30:38.653 [2024-04-17 06:56:43.086933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.087051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.087076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.653 qpair failed and we were unable to recover it. 00:30:38.653 [2024-04-17 06:56:43.087286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.087457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.087485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.653 qpair failed and we were unable to recover it. 00:30:38.653 [2024-04-17 06:56:43.087623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.087799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.087829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.653 qpair failed and we were unable to recover it. 00:30:38.653 [2024-04-17 06:56:43.087995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.088133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.088161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.653 qpair failed and we were unable to recover it. 00:30:38.653 [2024-04-17 06:56:43.088319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.088446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.088471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.653 qpair failed and we were unable to recover it. 
00:30:38.653 [2024-04-17 06:56:43.088670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.088826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.088854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.653 qpair failed and we were unable to recover it. 00:30:38.653 [2024-04-17 06:56:43.089051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.089199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.089233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.653 qpair failed and we were unable to recover it. 00:30:38.653 [2024-04-17 06:56:43.089398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.089554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.089586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.653 qpair failed and we were unable to recover it. 00:30:38.653 [2024-04-17 06:56:43.089763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.089922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.089964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.653 qpair failed and we were unable to recover it. 00:30:38.653 [2024-04-17 06:56:43.090158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.090333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.090360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.653 qpair failed and we were unable to recover it. 00:30:38.653 [2024-04-17 06:56:43.090534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.090682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.653 [2024-04-17 06:56:43.090707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.653 qpair failed and we were unable to recover it. 00:30:38.653 [2024-04-17 06:56:43.090841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.090980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.091007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.654 qpair failed and we were unable to recover it. 
00:30:38.654 [2024-04-17 06:56:43.091160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.091327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.091352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.654 qpair failed and we were unable to recover it. 00:30:38.654 [2024-04-17 06:56:43.091528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.091663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.091692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.654 qpair failed and we were unable to recover it. 00:30:38.654 [2024-04-17 06:56:43.091891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.092056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.092084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.654 qpair failed and we were unable to recover it. 00:30:38.654 [2024-04-17 06:56:43.092242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.092445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.092474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.654 qpair failed and we were unable to recover it. 00:30:38.654 [2024-04-17 06:56:43.092648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.092817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.092850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.654 qpair failed and we were unable to recover it. 00:30:38.654 [2024-04-17 06:56:43.093051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.093252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.093281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.654 qpair failed and we were unable to recover it. 00:30:38.654 [2024-04-17 06:56:43.093417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.093574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.093602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.654 qpair failed and we were unable to recover it. 
00:30:38.654 [2024-04-17 06:56:43.093738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.093879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.093906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.654 qpair failed and we were unable to recover it. 00:30:38.654 [2024-04-17 06:56:43.094053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.094202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.094228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.654 qpair failed and we were unable to recover it. 00:30:38.654 [2024-04-17 06:56:43.094387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.094531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.094559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.654 qpair failed and we were unable to recover it. 00:30:38.654 [2024-04-17 06:56:43.094767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.094887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.094912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.654 qpair failed and we were unable to recover it. 00:30:38.654 [2024-04-17 06:56:43.095068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.095212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.095241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.654 qpair failed and we were unable to recover it. 00:30:38.654 [2024-04-17 06:56:43.095410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.095541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.095566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.654 qpair failed and we were unable to recover it. 00:30:38.654 [2024-04-17 06:56:43.095717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.095863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.095892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.654 qpair failed and we were unable to recover it. 
00:30:38.654 [2024-04-17 06:56:43.096086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.096263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.096291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.654 qpair failed and we were unable to recover it. 00:30:38.654 [2024-04-17 06:56:43.096447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.096612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.096638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.654 qpair failed and we were unable to recover it. 00:30:38.654 [2024-04-17 06:56:43.096796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.096943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.096968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.654 qpair failed and we were unable to recover it. 00:30:38.654 [2024-04-17 06:56:43.097151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.097310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.097339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.654 qpair failed and we were unable to recover it. 00:30:38.654 [2024-04-17 06:56:43.097510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.097647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.097674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.654 qpair failed and we were unable to recover it. 00:30:38.654 [2024-04-17 06:56:43.097870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.098001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.098026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.654 qpair failed and we were unable to recover it. 00:30:38.654 [2024-04-17 06:56:43.098185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.098350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.098378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.654 qpair failed and we were unable to recover it. 
00:30:38.654 [2024-04-17 06:56:43.098522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.098684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.098712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.654 qpair failed and we were unable to recover it. 00:30:38.654 [2024-04-17 06:56:43.098849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.099049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.099074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.654 qpair failed and we were unable to recover it. 00:30:38.654 [2024-04-17 06:56:43.099258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.099431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.099459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.654 qpair failed and we were unable to recover it. 00:30:38.654 [2024-04-17 06:56:43.099599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.099758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.099783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.654 qpair failed and we were unable to recover it. 00:30:38.654 [2024-04-17 06:56:43.099988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.100119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.100145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.654 qpair failed and we were unable to recover it. 00:30:38.654 [2024-04-17 06:56:43.100299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.100422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.100447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.654 qpair failed and we were unable to recover it. 00:30:38.654 [2024-04-17 06:56:43.100599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.654 [2024-04-17 06:56:43.100721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.100746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.655 qpair failed and we were unable to recover it. 
00:30:38.655 [2024-04-17 06:56:43.100878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.101053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.101081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.655 qpair failed and we were unable to recover it. 00:30:38.655 [2024-04-17 06:56:43.101254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.101422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.101447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.655 qpair failed and we were unable to recover it. 00:30:38.655 [2024-04-17 06:56:43.101711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.101882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.101910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.655 qpair failed and we were unable to recover it. 00:30:38.655 [2024-04-17 06:56:43.102073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.102258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.102286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.655 qpair failed and we were unable to recover it. 00:30:38.655 [2024-04-17 06:56:43.102465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.102666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.102693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.655 qpair failed and we were unable to recover it. 00:30:38.655 [2024-04-17 06:56:43.102856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.103028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.103055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.655 qpair failed and we were unable to recover it. 00:30:38.655 [2024-04-17 06:56:43.103254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.103416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.103444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.655 qpair failed and we were unable to recover it. 
00:30:38.655 [2024-04-17 06:56:43.103621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.103760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.103786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.655 qpair failed and we were unable to recover it. 00:30:38.655 [2024-04-17 06:56:43.103947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.104206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.104234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.655 qpair failed and we were unable to recover it. 00:30:38.655 [2024-04-17 06:56:43.104430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.104613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.104640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.655 qpair failed and we were unable to recover it. 00:30:38.655 [2024-04-17 06:56:43.104783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.104960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.104990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.655 qpair failed and we were unable to recover it. 00:30:38.655 [2024-04-17 06:56:43.105169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.105327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.105353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.655 qpair failed and we were unable to recover it. 00:30:38.655 [2024-04-17 06:56:43.105503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.105626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.105676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.655 qpair failed and we were unable to recover it. 00:30:38.655 [2024-04-17 06:56:43.105900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.106039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.106067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.655 qpair failed and we were unable to recover it. 
00:30:38.655 [2024-04-17 06:56:43.106234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.106435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.106460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.655 qpair failed and we were unable to recover it. 00:30:38.655 [2024-04-17 06:56:43.106698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.106904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.106929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.655 qpair failed and we were unable to recover it. 00:30:38.655 [2024-04-17 06:56:43.107122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.107252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.107278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.655 qpair failed and we were unable to recover it. 00:30:38.655 [2024-04-17 06:56:43.107412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.107591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.107619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.655 qpair failed and we were unable to recover it. 00:30:38.655 [2024-04-17 06:56:43.107793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.107929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.107957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.655 qpair failed and we were unable to recover it. 00:30:38.655 [2024-04-17 06:56:43.108167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.108372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.108400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.655 qpair failed and we were unable to recover it. 00:30:38.655 [2024-04-17 06:56:43.108555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.108713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.108756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.655 qpair failed and we were unable to recover it. 
00:30:38.655 [2024-04-17 06:56:43.108936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.109072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.109099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.655 qpair failed and we were unable to recover it. 00:30:38.655 [2024-04-17 06:56:43.109269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.109470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.109502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.655 qpair failed and we were unable to recover it. 00:30:38.655 [2024-04-17 06:56:43.109649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.109817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.109844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.655 qpair failed and we were unable to recover it. 00:30:38.655 [2024-04-17 06:56:43.110020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.110199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.110227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.655 qpair failed and we were unable to recover it. 00:30:38.655 [2024-04-17 06:56:43.110369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.110565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.110592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.655 qpair failed and we were unable to recover it. 00:30:38.655 [2024-04-17 06:56:43.110737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.110905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.110932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.655 qpair failed and we were unable to recover it. 00:30:38.655 [2024-04-17 06:56:43.111070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.111209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.111244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.655 qpair failed and we were unable to recover it. 
00:30:38.655 [2024-04-17 06:56:43.111421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.655 [2024-04-17 06:56:43.111621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.111649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.656 qpair failed and we were unable to recover it. 00:30:38.656 [2024-04-17 06:56:43.111797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.111919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.111945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.656 qpair failed and we were unable to recover it. 00:30:38.656 [2024-04-17 06:56:43.112138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.112316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.112344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.656 qpair failed and we were unable to recover it. 00:30:38.656 [2024-04-17 06:56:43.112552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.112754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.112781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.656 qpair failed and we were unable to recover it. 00:30:38.656 [2024-04-17 06:56:43.112915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.113065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.113090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.656 qpair failed and we were unable to recover it. 00:30:38.656 [2024-04-17 06:56:43.113265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.113398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.113426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.656 qpair failed and we were unable to recover it. 00:30:38.656 [2024-04-17 06:56:43.113630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.113812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.113839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.656 qpair failed and we were unable to recover it. 
00:30:38.656 [2024-04-17 06:56:43.114006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.114187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.114215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.656 qpair failed and we were unable to recover it. 00:30:38.656 [2024-04-17 06:56:43.114359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.114486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.114511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.656 qpair failed and we were unable to recover it. 00:30:38.656 [2024-04-17 06:56:43.114632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.114922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.114990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.656 qpair failed and we were unable to recover it. 00:30:38.656 [2024-04-17 06:56:43.115202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.115343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.115371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.656 qpair failed and we were unable to recover it. 00:30:38.656 [2024-04-17 06:56:43.115557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.115711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.115736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.656 qpair failed and we were unable to recover it. 00:30:38.656 [2024-04-17 06:56:43.115867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.116030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.116070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.656 qpair failed and we were unable to recover it. 00:30:38.656 [2024-04-17 06:56:43.116234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.116442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.116471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.656 qpair failed and we were unable to recover it. 
00:30:38.656 [2024-04-17 06:56:43.116623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.116764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.116793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.656 qpair failed and we were unable to recover it. 00:30:38.656 [2024-04-17 06:56:43.116970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.117107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.117135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.656 qpair failed and we were unable to recover it. 00:30:38.656 [2024-04-17 06:56:43.117314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.117451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.117502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.656 qpair failed and we were unable to recover it. 00:30:38.656 [2024-04-17 06:56:43.117673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.117842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.117870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.656 qpair failed and we were unable to recover it. 00:30:38.656 [2024-04-17 06:56:43.118050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.118251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.118277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.656 qpair failed and we were unable to recover it. 00:30:38.656 [2024-04-17 06:56:43.118426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.118573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.118601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.656 qpair failed and we were unable to recover it. 00:30:38.656 [2024-04-17 06:56:43.118754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.118907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.118946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.656 qpair failed and we were unable to recover it. 
00:30:38.656 [2024-04-17 06:56:43.119156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.119360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.119389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.656 qpair failed and we were unable to recover it. 00:30:38.656 [2024-04-17 06:56:43.119556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.119820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.119847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.656 qpair failed and we were unable to recover it. 00:30:38.656 [2024-04-17 06:56:43.120055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.120181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.656 [2024-04-17 06:56:43.120207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.657 qpair failed and we were unable to recover it. 00:30:38.657 [2024-04-17 06:56:43.120358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.120541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.120569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.657 qpair failed and we were unable to recover it. 00:30:38.657 [2024-04-17 06:56:43.120726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.120885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.120913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.657 qpair failed and we were unable to recover it. 00:30:38.657 [2024-04-17 06:56:43.121097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.121260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.121302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.657 qpair failed and we were unable to recover it. 00:30:38.657 [2024-04-17 06:56:43.121487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.121637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.121662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.657 qpair failed and we were unable to recover it. 
00:30:38.657 [2024-04-17 06:56:43.121843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.122035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.122063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.657 qpair failed and we were unable to recover it. 00:30:38.657 [2024-04-17 06:56:43.122281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.122539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.122567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.657 qpair failed and we were unable to recover it. 00:30:38.657 [2024-04-17 06:56:43.122784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.122973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.122998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.657 qpair failed and we were unable to recover it. 00:30:38.657 [2024-04-17 06:56:43.123144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.123358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.123386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.657 qpair failed and we were unable to recover it. 00:30:38.657 [2024-04-17 06:56:43.123550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.123705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.123747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.657 qpair failed and we were unable to recover it. 00:30:38.657 [2024-04-17 06:56:43.123928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.124080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.124105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.657 qpair failed and we were unable to recover it. 00:30:38.657 [2024-04-17 06:56:43.124256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.124423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.124451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.657 qpair failed and we were unable to recover it. 
00:30:38.657 [2024-04-17 06:56:43.124629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.124796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.124824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.657 qpair failed and we were unable to recover it. 00:30:38.657 [2024-04-17 06:56:43.124981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.125161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.125195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.657 qpair failed and we were unable to recover it. 00:30:38.657 [2024-04-17 06:56:43.125371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.125528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.125553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.657 qpair failed and we were unable to recover it. 00:30:38.657 [2024-04-17 06:56:43.125751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.125998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.126058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.657 qpair failed and we were unable to recover it. 00:30:38.657 [2024-04-17 06:56:43.126304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.126482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.126510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.657 qpair failed and we were unable to recover it. 00:30:38.657 [2024-04-17 06:56:43.126680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.126841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.126867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.657 qpair failed and we were unable to recover it. 00:30:38.657 [2024-04-17 06:56:43.127045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.127235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.127261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.657 qpair failed and we were unable to recover it. 
00:30:38.657 [2024-04-17 06:56:43.127466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.127660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.127687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.657 qpair failed and we were unable to recover it. 00:30:38.657 [2024-04-17 06:56:43.127872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.128019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.128044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.657 qpair failed and we were unable to recover it. 00:30:38.657 [2024-04-17 06:56:43.128248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.128377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.128402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.657 qpair failed and we were unable to recover it. 00:30:38.657 [2024-04-17 06:56:43.128550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.128746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.128773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.657 qpair failed and we were unable to recover it. 00:30:38.657 [2024-04-17 06:56:43.128943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.129097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.129121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.657 qpair failed and we were unable to recover it. 00:30:38.657 [2024-04-17 06:56:43.129317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.129442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.129484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.657 qpair failed and we were unable to recover it. 00:30:38.657 [2024-04-17 06:56:43.129688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.129860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.129887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.657 qpair failed and we were unable to recover it. 
00:30:38.657 [2024-04-17 06:56:43.130031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.130217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.130243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.657 qpair failed and we were unable to recover it. 00:30:38.657 [2024-04-17 06:56:43.130373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.130550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.130582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.657 qpair failed and we were unable to recover it. 00:30:38.657 [2024-04-17 06:56:43.130753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.130950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.130978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.657 qpair failed and we were unable to recover it. 00:30:38.657 [2024-04-17 06:56:43.131173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.657 [2024-04-17 06:56:43.131355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.131383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.658 qpair failed and we were unable to recover it. 00:30:38.658 [2024-04-17 06:56:43.131555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.131799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.131862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.658 qpair failed and we were unable to recover it. 00:30:38.658 [2024-04-17 06:56:43.132039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.132237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.132265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.658 qpair failed and we were unable to recover it. 00:30:38.658 [2024-04-17 06:56:43.132430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.132575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.132603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.658 qpair failed and we were unable to recover it. 
00:30:38.658 [2024-04-17 06:56:43.132777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.132911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.132952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.658 qpair failed and we were unable to recover it. 00:30:38.658 [2024-04-17 06:56:43.133093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.133273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.133299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.658 qpair failed and we were unable to recover it. 00:30:38.658 [2024-04-17 06:56:43.133423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.133576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.133605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.658 qpair failed and we were unable to recover it. 00:30:38.658 [2024-04-17 06:56:43.133777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.133911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.133938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.658 qpair failed and we were unable to recover it. 00:30:38.658 [2024-04-17 06:56:43.134107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.134254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.134279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.658 qpair failed and we were unable to recover it. 00:30:38.658 [2024-04-17 06:56:43.134409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.134655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.134683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.658 qpair failed and we were unable to recover it. 00:30:38.658 [2024-04-17 06:56:43.134876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.135047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.135074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.658 qpair failed and we were unable to recover it. 
00:30:38.658 [2024-04-17 06:56:43.135241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.135408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.135436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.658 qpair failed and we were unable to recover it. 00:30:38.658 [2024-04-17 06:56:43.135652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.135777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.135802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.658 qpair failed and we were unable to recover it. 00:30:38.658 [2024-04-17 06:56:43.135955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.136093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.136120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.658 qpair failed and we were unable to recover it. 00:30:38.658 [2024-04-17 06:56:43.136296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.136452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.136487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.658 qpair failed and we were unable to recover it. 00:30:38.658 [2024-04-17 06:56:43.136623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.136784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.136811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.658 qpair failed and we were unable to recover it. 00:30:38.658 [2024-04-17 06:56:43.136990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.137209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.137238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.658 qpair failed and we were unable to recover it. 00:30:38.658 [2024-04-17 06:56:43.137387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.137586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.137613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.658 qpair failed and we were unable to recover it. 
00:30:38.658 [2024-04-17 06:56:43.137785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.138024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.138073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.658 qpair failed and we were unable to recover it. 00:30:38.658 [2024-04-17 06:56:43.138265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.138387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.138412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.658 qpair failed and we were unable to recover it. 00:30:38.658 [2024-04-17 06:56:43.138566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.138771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.138802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.658 qpair failed and we were unable to recover it. 00:30:38.658 [2024-04-17 06:56:43.139054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.139227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.139257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.658 qpair failed and we were unable to recover it. 00:30:38.658 [2024-04-17 06:56:43.139428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.139624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.139651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.658 qpair failed and we were unable to recover it. 00:30:38.658 [2024-04-17 06:56:43.139844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.139983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.140010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.658 qpair failed and we were unable to recover it. 00:30:38.658 [2024-04-17 06:56:43.140182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.140338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.140380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.658 qpair failed and we were unable to recover it. 
00:30:38.658 [2024-04-17 06:56:43.140626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.140824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.140851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.658 qpair failed and we were unable to recover it. 00:30:38.658 [2024-04-17 06:56:43.141025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.141209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.141237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.658 qpair failed and we were unable to recover it. 00:30:38.658 [2024-04-17 06:56:43.141407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.141576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.141604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.658 qpair failed and we were unable to recover it. 00:30:38.658 [2024-04-17 06:56:43.141752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.141909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.658 [2024-04-17 06:56:43.141934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.658 qpair failed and we were unable to recover it. 00:30:38.658 [2024-04-17 06:56:43.142156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.142318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.142344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.659 qpair failed and we were unable to recover it. 00:30:38.659 [2024-04-17 06:56:43.142513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.142642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.142670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.659 qpair failed and we were unable to recover it. 00:30:38.659 [2024-04-17 06:56:43.142872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.143042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.143070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.659 qpair failed and we were unable to recover it. 
00:30:38.659 [2024-04-17 06:56:43.143253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.143377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.143402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.659 qpair failed and we were unable to recover it. 00:30:38.659 [2024-04-17 06:56:43.143583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.143752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.143780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.659 qpair failed and we were unable to recover it. 00:30:38.659 [2024-04-17 06:56:43.143921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.144119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.144146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.659 qpair failed and we were unable to recover it. 00:30:38.659 [2024-04-17 06:56:43.144336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.144507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.144536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.659 qpair failed and we were unable to recover it. 00:30:38.659 [2024-04-17 06:56:43.144716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.144918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.144945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.659 qpair failed and we were unable to recover it. 00:30:38.659 [2024-04-17 06:56:43.145142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.145367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.145396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.659 qpair failed and we were unable to recover it. 00:30:38.659 [2024-04-17 06:56:43.145537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.145712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.145740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.659 qpair failed and we were unable to recover it. 
00:30:38.659 [2024-04-17 06:56:43.145915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.146053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.146081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.659 qpair failed and we were unable to recover it. 00:30:38.659 [2024-04-17 06:56:43.146282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.146437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.146470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.659 qpair failed and we were unable to recover it. 00:30:38.659 [2024-04-17 06:56:43.146596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.146746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.146771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.659 qpair failed and we were unable to recover it. 00:30:38.659 [2024-04-17 06:56:43.146943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.147077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.147105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.659 qpair failed and we were unable to recover it. 00:30:38.659 [2024-04-17 06:56:43.147289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.147488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.147516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.659 qpair failed and we were unable to recover it. 00:30:38.659 [2024-04-17 06:56:43.147687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.147805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.147830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.659 qpair failed and we were unable to recover it. 00:30:38.659 [2024-04-17 06:56:43.147980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.148181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.148210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.659 qpair failed and we were unable to recover it. 
00:30:38.659 [2024-04-17 06:56:43.148386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.148525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.148553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.659 qpair failed and we were unable to recover it. 00:30:38.659 [2024-04-17 06:56:43.148730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.148888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.148913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.659 qpair failed and we were unable to recover it. 00:30:38.659 [2024-04-17 06:56:43.149097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.149301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.149330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.659 qpair failed and we were unable to recover it. 00:30:38.659 [2024-04-17 06:56:43.149514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.149698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.149730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.659 qpair failed and we were unable to recover it. 00:30:38.659 [2024-04-17 06:56:43.149899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.150038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.150065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.659 qpair failed and we were unable to recover it. 00:30:38.659 [2024-04-17 06:56:43.150312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.150517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.150542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.659 qpair failed and we were unable to recover it. 00:30:38.659 [2024-04-17 06:56:43.150703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.150860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.150885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.659 qpair failed and we were unable to recover it. 
00:30:38.659 [2024-04-17 06:56:43.151024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.151189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.151231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.659 qpair failed and we were unable to recover it. 00:30:38.659 [2024-04-17 06:56:43.151401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.151582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.151615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.659 qpair failed and we were unable to recover it. 00:30:38.659 [2024-04-17 06:56:43.151791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.151912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.151937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.659 qpair failed and we were unable to recover it. 00:30:38.659 [2024-04-17 06:56:43.152117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.152324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.152350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.659 qpair failed and we were unable to recover it. 00:30:38.659 [2024-04-17 06:56:43.152496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.152709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.659 [2024-04-17 06:56:43.152734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.659 qpair failed and we were unable to recover it. 00:30:38.660 [2024-04-17 06:56:43.152882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.153022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.153050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.660 qpair failed and we were unable to recover it. 00:30:38.660 [2024-04-17 06:56:43.153213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.153393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.153423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.660 qpair failed and we were unable to recover it. 
00:30:38.660 [2024-04-17 06:56:43.153580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.153730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.153755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.660 qpair failed and we were unable to recover it. 00:30:38.660 [2024-04-17 06:56:43.153919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.154092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.154120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.660 qpair failed and we were unable to recover it. 00:30:38.660 [2024-04-17 06:56:43.154302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.154504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.154529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.660 qpair failed and we were unable to recover it. 00:30:38.660 [2024-04-17 06:56:43.154704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.154907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.154932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.660 qpair failed and we were unable to recover it. 00:30:38.660 [2024-04-17 06:56:43.155090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.155300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.155329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.660 qpair failed and we were unable to recover it. 00:30:38.660 [2024-04-17 06:56:43.155466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.155669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.155694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.660 qpair failed and we were unable to recover it. 00:30:38.660 [2024-04-17 06:56:43.155843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.156041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.156069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.660 qpair failed and we were unable to recover it. 
00:30:38.660 [2024-04-17 06:56:43.156233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.156417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.156442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.660 qpair failed and we were unable to recover it. 00:30:38.660 [2024-04-17 06:56:43.156631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.156856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.156883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.660 qpair failed and we were unable to recover it. 00:30:38.660 [2024-04-17 06:56:43.157070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.157204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.157246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.660 qpair failed and we were unable to recover it. 00:30:38.660 [2024-04-17 06:56:43.157383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.157526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.157554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.660 qpair failed and we were unable to recover it. 00:30:38.660 [2024-04-17 06:56:43.157732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.157882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.157907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.660 qpair failed and we were unable to recover it. 00:30:38.660 [2024-04-17 06:56:43.158047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.158240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.158268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.660 qpair failed and we were unable to recover it. 00:30:38.660 [2024-04-17 06:56:43.158445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.158713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.158740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.660 qpair failed and we were unable to recover it. 
00:30:38.660 [2024-04-17 06:56:43.158920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.159078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.159103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.660 qpair failed and we were unable to recover it. 00:30:38.660 [2024-04-17 06:56:43.159294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.159435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.159486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.660 qpair failed and we were unable to recover it. 00:30:38.660 [2024-04-17 06:56:43.159689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.159834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.159861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.660 qpair failed and we were unable to recover it. 00:30:38.660 [2024-04-17 06:56:43.160035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.160251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.160277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.660 qpair failed and we were unable to recover it. 00:30:38.660 [2024-04-17 06:56:43.160476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.160645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.160670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.660 qpair failed and we were unable to recover it. 00:30:38.660 [2024-04-17 06:56:43.160826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.160974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.161002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.660 qpair failed and we were unable to recover it. 00:30:38.660 [2024-04-17 06:56:43.161191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.161394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.161421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.660 qpair failed and we were unable to recover it. 
00:30:38.660 [2024-04-17 06:56:43.161637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.161829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.161857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.660 qpair failed and we were unable to recover it. 00:30:38.660 [2024-04-17 06:56:43.162054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.162227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.660 [2024-04-17 06:56:43.162256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.661 qpair failed and we were unable to recover it. 00:30:38.661 [2024-04-17 06:56:43.162422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.162561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.162588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.661 qpair failed and we were unable to recover it. 00:30:38.661 [2024-04-17 06:56:43.162785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.162952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.162979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.661 qpair failed and we were unable to recover it. 00:30:38.661 [2024-04-17 06:56:43.163130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.163311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.163337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.661 qpair failed and we were unable to recover it. 00:30:38.661 [2024-04-17 06:56:43.163522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.163702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.163730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.661 qpair failed and we were unable to recover it. 00:30:38.661 [2024-04-17 06:56:43.163910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.164055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.164082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.661 qpair failed and we were unable to recover it. 
00:30:38.661 [2024-04-17 06:56:43.164263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.164472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.164499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.661 qpair failed and we were unable to recover it. 00:30:38.661 [2024-04-17 06:56:43.164640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.164854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.164882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.661 qpair failed and we were unable to recover it. 00:30:38.661 [2024-04-17 06:56:43.165057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.165306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.165335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.661 qpair failed and we were unable to recover it. 00:30:38.661 [2024-04-17 06:56:43.165473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.165676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.165701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.661 qpair failed and we were unable to recover it. 00:30:38.661 [2024-04-17 06:56:43.165879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.166001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.166025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.661 qpair failed and we were unable to recover it. 00:30:38.661 [2024-04-17 06:56:43.166211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.166388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.166415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.661 qpair failed and we were unable to recover it. 00:30:38.661 [2024-04-17 06:56:43.166589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.166745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.166774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.661 qpair failed and we were unable to recover it. 
00:30:38.661 [2024-04-17 06:56:43.166940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.167100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.167128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.661 qpair failed and we were unable to recover it. 00:30:38.661 [2024-04-17 06:56:43.167309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.167430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.167471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.661 qpair failed and we were unable to recover it. 00:30:38.661 [2024-04-17 06:56:43.167636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.167793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.167820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.661 qpair failed and we were unable to recover it. 00:30:38.661 [2024-04-17 06:56:43.167958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.168121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.168148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.661 qpair failed and we were unable to recover it. 00:30:38.661 [2024-04-17 06:56:43.168309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.168460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.168484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.661 qpair failed and we were unable to recover it. 00:30:38.661 [2024-04-17 06:56:43.168668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.168856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.168884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.661 qpair failed and we were unable to recover it. 00:30:38.661 [2024-04-17 06:56:43.169067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.169256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.169282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.661 qpair failed and we were unable to recover it. 
00:30:38.661 [2024-04-17 06:56:43.169462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.169656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.169683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.661 qpair failed and we were unable to recover it. 00:30:38.661 [2024-04-17 06:56:43.169850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.170005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.170029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.661 qpair failed and we were unable to recover it. 00:30:38.661 [2024-04-17 06:56:43.170216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.170379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.170404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.661 qpair failed and we were unable to recover it. 00:30:38.661 [2024-04-17 06:56:43.170556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.170695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.170723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.661 qpair failed and we were unable to recover it. 00:30:38.661 [2024-04-17 06:56:43.170862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.171047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.171075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.661 qpair failed and we were unable to recover it. 00:30:38.661 [2024-04-17 06:56:43.171240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.171424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.171450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.661 qpair failed and we were unable to recover it. 00:30:38.661 [2024-04-17 06:56:43.171610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.171778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.171806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.661 qpair failed and we were unable to recover it. 
00:30:38.661 [2024-04-17 06:56:43.172004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.172183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.172211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.661 qpair failed and we were unable to recover it. 00:30:38.661 [2024-04-17 06:56:43.172368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.172553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.661 [2024-04-17 06:56:43.172601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.661 qpair failed and we were unable to recover it. 00:30:38.662 [2024-04-17 06:56:43.172734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.172930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.172957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.662 qpair failed and we were unable to recover it. 00:30:38.662 [2024-04-17 06:56:43.173136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.173310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.173336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.662 qpair failed and we were unable to recover it. 00:30:38.662 [2024-04-17 06:56:43.173489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.173646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.173687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.662 qpair failed and we were unable to recover it. 00:30:38.662 [2024-04-17 06:56:43.173829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.174030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.174057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.662 qpair failed and we were unable to recover it. 00:30:38.662 [2024-04-17 06:56:43.174248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.174404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.174429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.662 qpair failed and we were unable to recover it. 
00:30:38.662 [2024-04-17 06:56:43.174658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.174780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.174822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.662 qpair failed and we were unable to recover it. 00:30:38.662 [2024-04-17 06:56:43.175005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.175186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.175229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.662 qpair failed and we were unable to recover it. 00:30:38.662 [2024-04-17 06:56:43.175404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.175545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.175574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.662 qpair failed and we were unable to recover it. 00:30:38.662 [2024-04-17 06:56:43.175738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.175907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.175932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.662 qpair failed and we were unable to recover it. 00:30:38.662 [2024-04-17 06:56:43.176115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.176268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.176309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.662 qpair failed and we were unable to recover it. 00:30:38.662 [2024-04-17 06:56:43.176506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.176632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.176657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.662 qpair failed and we were unable to recover it. 00:30:38.662 [2024-04-17 06:56:43.176867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.177040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.177067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.662 qpair failed and we were unable to recover it. 
00:30:38.662 [2024-04-17 06:56:43.177243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.177407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.177434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.662 qpair failed and we were unable to recover it. 00:30:38.662 [2024-04-17 06:56:43.177619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.177774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.177798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.662 qpair failed and we were unable to recover it. 00:30:38.662 [2024-04-17 06:56:43.178009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.178145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.178186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.662 qpair failed and we were unable to recover it. 00:30:38.662 [2024-04-17 06:56:43.178390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.178555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.178580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.662 qpair failed and we were unable to recover it. 00:30:38.662 [2024-04-17 06:56:43.178756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.178902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.178929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.662 qpair failed and we were unable to recover it. 00:30:38.662 [2024-04-17 06:56:43.179080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.179256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.179281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.662 qpair failed and we were unable to recover it. 00:30:38.662 [2024-04-17 06:56:43.179424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.179590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.179615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.662 qpair failed and we were unable to recover it. 
00:30:38.662 [2024-04-17 06:56:43.179788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.179952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.179977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.662 qpair failed and we were unable to recover it. 00:30:38.662 [2024-04-17 06:56:43.180151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.180342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.180367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.662 qpair failed and we were unable to recover it. 00:30:38.662 [2024-04-17 06:56:43.180547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.180728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.180753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.662 qpair failed and we were unable to recover it. 00:30:38.662 [2024-04-17 06:56:43.180937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.181108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.181135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.662 qpair failed and we were unable to recover it. 00:30:38.662 [2024-04-17 06:56:43.181303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.181551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.181579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.662 qpair failed and we were unable to recover it. 00:30:38.662 [2024-04-17 06:56:43.181755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.181938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.181963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.662 qpair failed and we were unable to recover it. 00:30:38.662 [2024-04-17 06:56:43.182120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.182288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.182317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.662 qpair failed and we were unable to recover it. 
00:30:38.662 [2024-04-17 06:56:43.182480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.182612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.182640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.662 qpair failed and we were unable to recover it. 00:30:38.662 [2024-04-17 06:56:43.182813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.182989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.183016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.662 qpair failed and we were unable to recover it. 00:30:38.662 [2024-04-17 06:56:43.183187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.183359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.662 [2024-04-17 06:56:43.183386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.662 qpair failed and we were unable to recover it. 00:30:38.663 [2024-04-17 06:56:43.183559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.183757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.183784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.663 qpair failed and we were unable to recover it. 00:30:38.663 [2024-04-17 06:56:43.183925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.184073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.184101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.663 qpair failed and we were unable to recover it. 00:30:38.663 [2024-04-17 06:56:43.184251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.184409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.184437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.663 qpair failed and we were unable to recover it. 00:30:38.663 [2024-04-17 06:56:43.184625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.184792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.184820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.663 qpair failed and we were unable to recover it. 
00:30:38.663 [2024-04-17 06:56:43.184975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.185128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.185154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.663 qpair failed and we were unable to recover it. 00:30:38.663 [2024-04-17 06:56:43.185331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.185511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.185537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.663 qpair failed and we were unable to recover it. 00:30:38.663 [2024-04-17 06:56:43.185686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.185879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.185950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.663 qpair failed and we were unable to recover it. 00:30:38.663 [2024-04-17 06:56:43.186095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.186264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.186292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.663 qpair failed and we were unable to recover it. 00:30:38.663 [2024-04-17 06:56:43.186439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.186558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.186583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.663 qpair failed and we were unable to recover it. 00:30:38.663 [2024-04-17 06:56:43.186830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.187005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.187033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.663 qpair failed and we were unable to recover it. 00:30:38.663 [2024-04-17 06:56:43.187217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.187418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.187446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.663 qpair failed and we were unable to recover it. 
00:30:38.663 [2024-04-17 06:56:43.187622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.187796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.187823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.663 qpair failed and we were unable to recover it. 00:30:38.663 [2024-04-17 06:56:43.188041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.188215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.188243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.663 qpair failed and we were unable to recover it. 00:30:38.663 [2024-04-17 06:56:43.188432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.188614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.188656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.663 qpair failed and we were unable to recover it. 00:30:38.663 [2024-04-17 06:56:43.188832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.188997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.189024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.663 qpair failed and we were unable to recover it. 00:30:38.663 [2024-04-17 06:56:43.189171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.189364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.189393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.663 qpair failed and we were unable to recover it. 00:30:38.663 [2024-04-17 06:56:43.189605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.189805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.189833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.663 qpair failed and we were unable to recover it. 00:30:38.663 [2024-04-17 06:56:43.189985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.190150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.190188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.663 qpair failed and we were unable to recover it. 
00:30:38.663 [2024-04-17 06:56:43.190436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.190622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.190647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.663 qpair failed and we were unable to recover it. 00:30:38.663 [2024-04-17 06:56:43.190797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.190987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.191014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.663 qpair failed and we were unable to recover it. 00:30:38.663 [2024-04-17 06:56:43.191189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.191315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.191340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.663 qpair failed and we were unable to recover it. 00:30:38.663 [2024-04-17 06:56:43.191521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.191665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.191698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.663 qpair failed and we were unable to recover it. 00:30:38.663 [2024-04-17 06:56:43.191898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.192069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.192097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.663 qpair failed and we were unable to recover it. 00:30:38.663 [2024-04-17 06:56:43.192296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.192472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.192500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.663 qpair failed and we were unable to recover it. 00:30:38.663 [2024-04-17 06:56:43.192657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.192786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.192810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.663 qpair failed and we were unable to recover it. 
00:30:38.663 [2024-04-17 06:56:43.192971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.193191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.193217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.663 qpair failed and we were unable to recover it. 00:30:38.663 [2024-04-17 06:56:43.193347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.193499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.193525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.663 qpair failed and we were unable to recover it. 00:30:38.663 [2024-04-17 06:56:43.193704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.193873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.193901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.663 qpair failed and we were unable to recover it. 00:30:38.663 [2024-04-17 06:56:43.194081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.663 [2024-04-17 06:56:43.194233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.194258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.664 qpair failed and we were unable to recover it. 00:30:38.664 [2024-04-17 06:56:43.194384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.194505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.194530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.664 qpair failed and we were unable to recover it. 00:30:38.664 [2024-04-17 06:56:43.194677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.194824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.194849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.664 qpair failed and we were unable to recover it. 00:30:38.664 [2024-04-17 06:56:43.194998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.195146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.195184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.664 qpair failed and we were unable to recover it. 
00:30:38.664 [2024-04-17 06:56:43.195315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.195474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.195498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.664 qpair failed and we were unable to recover it. 00:30:38.664 [2024-04-17 06:56:43.195689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.195842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.195867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.664 qpair failed and we were unable to recover it. 00:30:38.664 [2024-04-17 06:56:43.196040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.196197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.196222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.664 qpair failed and we were unable to recover it. 00:30:38.664 [2024-04-17 06:56:43.196369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.196547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.196572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.664 qpair failed and we were unable to recover it. 00:30:38.664 [2024-04-17 06:56:43.196758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.196884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.196910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.664 qpair failed and we were unable to recover it. 00:30:38.664 [2024-04-17 06:56:43.197126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.197302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.197331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.664 qpair failed and we were unable to recover it. 00:30:38.664 [2024-04-17 06:56:43.197537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.197692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.197717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.664 qpair failed and we were unable to recover it. 
00:30:38.664 [2024-04-17 06:56:43.197838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.197988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.198013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.664 qpair failed and we were unable to recover it. 00:30:38.664 [2024-04-17 06:56:43.198190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.198359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.198387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.664 qpair failed and we were unable to recover it. 00:30:38.664 [2024-04-17 06:56:43.198591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.198741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.198766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.664 qpair failed and we were unable to recover it. 00:30:38.664 [2024-04-17 06:56:43.198924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.199086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.199113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.664 qpair failed and we were unable to recover it. 00:30:38.664 [2024-04-17 06:56:43.199317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.199516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.199544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.664 qpair failed and we were unable to recover it. 00:30:38.664 [2024-04-17 06:56:43.199690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.199857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.199882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.664 qpair failed and we were unable to recover it. 00:30:38.664 [2024-04-17 06:56:43.200098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.200229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.200255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.664 qpair failed and we were unable to recover it. 
00:30:38.664 [2024-04-17 06:56:43.200467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.200642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.200669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.664 qpair failed and we were unable to recover it. 00:30:38.664 [2024-04-17 06:56:43.200840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.200979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.201006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.664 qpair failed and we were unable to recover it. 00:30:38.664 [2024-04-17 06:56:43.201152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.201307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.201356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.664 qpair failed and we were unable to recover it. 00:30:38.664 [2024-04-17 06:56:43.201495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.201696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.201721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.664 qpair failed and we were unable to recover it. 00:30:38.664 [2024-04-17 06:56:43.201924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.202105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.202132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.664 qpair failed and we were unable to recover it. 00:30:38.664 [2024-04-17 06:56:43.202302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.202452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.202503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.664 qpair failed and we were unable to recover it. 00:30:38.664 [2024-04-17 06:56:43.202682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.202858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.202886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.664 qpair failed and we were unable to recover it. 
00:30:38.664 [2024-04-17 06:56:43.203017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.203189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.203217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.664 qpair failed and we were unable to recover it. 00:30:38.664 [2024-04-17 06:56:43.203420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.203590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.203617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.664 qpair failed and we were unable to recover it. 00:30:38.664 [2024-04-17 06:56:43.203783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.204025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.204052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.664 qpair failed and we were unable to recover it. 00:30:38.664 [2024-04-17 06:56:43.204206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.204341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.204366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.664 qpair failed and we were unable to recover it. 00:30:38.664 [2024-04-17 06:56:43.204563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.664 [2024-04-17 06:56:43.204715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.204744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.665 qpair failed and we were unable to recover it. 00:30:38.665 [2024-04-17 06:56:43.204952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.205075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.205100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.665 qpair failed and we were unable to recover it. 00:30:38.665 [2024-04-17 06:56:43.205256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.205438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.205466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.665 qpair failed and we were unable to recover it. 
00:30:38.665 [2024-04-17 06:56:43.205637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.205783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.205808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.665 qpair failed and we were unable to recover it. 00:30:38.665 [2024-04-17 06:56:43.205976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.206138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.206166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.665 qpair failed and we were unable to recover it. 00:30:38.665 [2024-04-17 06:56:43.206323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.206463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.206490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.665 qpair failed and we were unable to recover it. 00:30:38.665 [2024-04-17 06:56:43.206649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.206828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.206855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.665 qpair failed and we were unable to recover it. 00:30:38.665 [2024-04-17 06:56:43.207022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.207167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.207217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.665 qpair failed and we were unable to recover it. 00:30:38.665 [2024-04-17 06:56:43.207391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.207552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.207580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.665 qpair failed and we were unable to recover it. 00:30:38.665 [2024-04-17 06:56:43.207755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.207926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.207956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.665 qpair failed and we were unable to recover it. 
00:30:38.665 [2024-04-17 06:56:43.208130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.208276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.208304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.665 qpair failed and we were unable to recover it. 00:30:38.665 [2024-04-17 06:56:43.208476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.208626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.208652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.665 qpair failed and we were unable to recover it. 00:30:38.665 [2024-04-17 06:56:43.208794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.208974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.208999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.665 qpair failed and we were unable to recover it. 00:30:38.665 [2024-04-17 06:56:43.209154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.209317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.209345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.665 qpair failed and we were unable to recover it. 00:30:38.665 [2024-04-17 06:56:43.209486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.209624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.209652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.665 qpair failed and we were unable to recover it. 00:30:38.665 [2024-04-17 06:56:43.209830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.210032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.210064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.665 qpair failed and we were unable to recover it. 00:30:38.665 [2024-04-17 06:56:43.210203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.210352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.210381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.665 qpair failed and we were unable to recover it. 
00:30:38.665 [2024-04-17 06:56:43.210538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.210708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.210735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.665 qpair failed and we were unable to recover it. 00:30:38.665 [2024-04-17 06:56:43.210898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.211035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.211062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.665 qpair failed and we were unable to recover it. 00:30:38.665 [2024-04-17 06:56:43.211213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.211359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.211400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.665 qpair failed and we were unable to recover it. 00:30:38.665 [2024-04-17 06:56:43.211564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.211730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.211759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.665 qpair failed and we were unable to recover it. 00:30:38.665 [2024-04-17 06:56:43.212005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.212214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.212240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.665 qpair failed and we were unable to recover it. 00:30:38.665 [2024-04-17 06:56:43.212399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.212620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.212645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.665 qpair failed and we were unable to recover it. 00:30:38.665 [2024-04-17 06:56:43.212826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.212972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.212996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.665 qpair failed and we were unable to recover it. 
00:30:38.665 [2024-04-17 06:56:43.213203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.665 [2024-04-17 06:56:43.213443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.213468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.666 qpair failed and we were unable to recover it. 00:30:38.666 [2024-04-17 06:56:43.213646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.213810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.213842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.666 qpair failed and we were unable to recover it. 00:30:38.666 [2024-04-17 06:56:43.213992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.214159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.214194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.666 qpair failed and we were unable to recover it. 00:30:38.666 [2024-04-17 06:56:43.214391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.214546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.214586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.666 qpair failed and we were unable to recover it. 00:30:38.666 [2024-04-17 06:56:43.214735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.214917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.214957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.666 qpair failed and we were unable to recover it. 00:30:38.666 [2024-04-17 06:56:43.215166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.215365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.215393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.666 qpair failed and we were unable to recover it. 00:30:38.666 [2024-04-17 06:56:43.215577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.215726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.215751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.666 qpair failed and we were unable to recover it. 
00:30:38.666 [2024-04-17 06:56:43.215875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.216027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.216052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.666 qpair failed and we were unable to recover it. 00:30:38.666 [2024-04-17 06:56:43.216236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.216361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.216385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.666 qpair failed and we were unable to recover it. 00:30:38.666 [2024-04-17 06:56:43.216586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.216814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.216848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.666 qpair failed and we were unable to recover it. 00:30:38.666 [2024-04-17 06:56:43.217000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.217170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.217206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.666 qpair failed and we were unable to recover it. 00:30:38.666 [2024-04-17 06:56:43.217387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.217599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.217627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.666 qpair failed and we were unable to recover it. 00:30:38.666 [2024-04-17 06:56:43.217795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.217965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.217990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.666 qpair failed and we were unable to recover it. 00:30:38.666 [2024-04-17 06:56:43.218146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.218316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.218344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.666 qpair failed and we were unable to recover it. 
00:30:38.666 [2024-04-17 06:56:43.218505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.218706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.218734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.666 qpair failed and we were unable to recover it. 00:30:38.666 [2024-04-17 06:56:43.218936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.219118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.219160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.666 qpair failed and we were unable to recover it. 00:30:38.666 [2024-04-17 06:56:43.219351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.219514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.219542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.666 qpair failed and we were unable to recover it. 00:30:38.666 [2024-04-17 06:56:43.219749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.219932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.219957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.666 qpair failed and we were unable to recover it. 00:30:38.666 [2024-04-17 06:56:43.220094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.220269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.220299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.666 qpair failed and we were unable to recover it. 00:30:38.666 [2024-04-17 06:56:43.220510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.220694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.220723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.666 qpair failed and we were unable to recover it. 00:30:38.666 [2024-04-17 06:56:43.220908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.221067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.221095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.666 qpair failed and we were unable to recover it. 
00:30:38.666 [2024-04-17 06:56:43.221250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.221401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.221431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.666 qpair failed and we were unable to recover it. 00:30:38.666 [2024-04-17 06:56:43.221614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.221814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.221838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.666 qpair failed and we were unable to recover it. 00:30:38.666 [2024-04-17 06:56:43.222019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.222195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.222234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.666 qpair failed and we were unable to recover it. 00:30:38.666 [2024-04-17 06:56:43.222433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.222603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.222631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.666 qpair failed and we were unable to recover it. 00:30:38.666 [2024-04-17 06:56:43.222802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.222984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.223014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.666 qpair failed and we were unable to recover it. 00:30:38.666 [2024-04-17 06:56:43.223230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.223360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.223386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.666 qpair failed and we were unable to recover it. 00:30:38.666 [2024-04-17 06:56:43.223572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.223753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.223781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.666 qpair failed and we were unable to recover it. 
00:30:38.666 [2024-04-17 06:56:43.224120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.224372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.666 [2024-04-17 06:56:43.224399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.666 qpair failed and we were unable to recover it. 00:30:38.666 [2024-04-17 06:56:43.224575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.945 [2024-04-17 06:56:43.224775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.945 [2024-04-17 06:56:43.224799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.945 qpair failed and we were unable to recover it. 00:30:38.945 [2024-04-17 06:56:43.224957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.945 [2024-04-17 06:56:43.225142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.945 [2024-04-17 06:56:43.225170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.945 qpair failed and we were unable to recover it. 00:30:38.945 [2024-04-17 06:56:43.225341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.945 [2024-04-17 06:56:43.225540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.945 [2024-04-17 06:56:43.225573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.945 qpair failed and we were unable to recover it. 00:30:38.945 [2024-04-17 06:56:43.225776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.945 [2024-04-17 06:56:43.225930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.945 [2024-04-17 06:56:43.225966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.945 qpair failed and we were unable to recover it. 00:30:38.945 [2024-04-17 06:56:43.226112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.945 [2024-04-17 06:56:43.226322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.945 [2024-04-17 06:56:43.226362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.945 qpair failed and we were unable to recover it. 00:30:38.945 [2024-04-17 06:56:43.226514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.945 [2024-04-17 06:56:43.226658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.945 [2024-04-17 06:56:43.226694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.945 qpair failed and we were unable to recover it. 
00:30:38.945 [2024-04-17 06:56:43.226871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.945 [2024-04-17 06:56:43.227040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.945 [2024-04-17 06:56:43.227067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.945 qpair failed and we were unable to recover it. 00:30:38.945 [2024-04-17 06:56:43.227218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.945 [2024-04-17 06:56:43.227351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.945 [2024-04-17 06:56:43.227376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.945 qpair failed and we were unable to recover it. 00:30:38.945 [2024-04-17 06:56:43.227504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.945 [2024-04-17 06:56:43.227661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.945 [2024-04-17 06:56:43.227688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.945 qpair failed and we were unable to recover it. 00:30:38.945 [2024-04-17 06:56:43.227873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.945 [2024-04-17 06:56:43.228013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.945 [2024-04-17 06:56:43.228041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.945 qpair failed and we were unable to recover it. 00:30:38.945 [2024-04-17 06:56:43.228222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.945 [2024-04-17 06:56:43.228391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.945 [2024-04-17 06:56:43.228417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.945 qpair failed and we were unable to recover it. 00:30:38.945 [2024-04-17 06:56:43.228587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.945 [2024-04-17 06:56:43.228756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.945 [2024-04-17 06:56:43.228782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.946 qpair failed and we were unable to recover it. 00:30:38.946 [2024-04-17 06:56:43.228938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.229120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.229163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.946 qpair failed and we were unable to recover it. 
00:30:38.946 [2024-04-17 06:56:43.229339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.229469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.229503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.946 qpair failed and we were unable to recover it. 00:30:38.946 [2024-04-17 06:56:43.229684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.229809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.229834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.946 qpair failed and we were unable to recover it. 00:30:38.946 [2024-04-17 06:56:43.229988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.230156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.230199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.946 qpair failed and we were unable to recover it. 00:30:38.946 [2024-04-17 06:56:43.230372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.230566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.230590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.946 qpair failed and we were unable to recover it. 00:30:38.946 [2024-04-17 06:56:43.230768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.230994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.231019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.946 qpair failed and we were unable to recover it. 00:30:38.946 [2024-04-17 06:56:43.231196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.231388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.231417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.946 qpair failed and we were unable to recover it. 00:30:38.946 [2024-04-17 06:56:43.231667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.231796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.231821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.946 qpair failed and we were unable to recover it. 
00:30:38.946 [2024-04-17 06:56:43.231968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.232107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.232136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.946 qpair failed and we were unable to recover it. 00:30:38.946 [2024-04-17 06:56:43.232338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.232504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.232544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.946 qpair failed and we were unable to recover it. 00:30:38.946 [2024-04-17 06:56:43.232727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.232932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.232959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.946 qpair failed and we were unable to recover it. 00:30:38.946 [2024-04-17 06:56:43.233121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.233302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.233332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.946 qpair failed and we were unable to recover it. 00:30:38.946 [2024-04-17 06:56:43.233494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.233667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.233695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.946 qpair failed and we were unable to recover it. 00:30:38.946 [2024-04-17 06:56:43.233862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.234057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.234084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.946 qpair failed and we were unable to recover it. 00:30:38.946 [2024-04-17 06:56:43.234283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.234488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.234515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.946 qpair failed and we were unable to recover it. 
00:30:38.946 [2024-04-17 06:56:43.234715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.234863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.234891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.946 qpair failed and we were unable to recover it. 00:30:38.946 [2024-04-17 06:56:43.235067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.235279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.235304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.946 qpair failed and we were unable to recover it. 00:30:38.946 [2024-04-17 06:56:43.235430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.235602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.235630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.946 qpair failed and we were unable to recover it. 00:30:38.946 [2024-04-17 06:56:43.235799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.236001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.236029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.946 qpair failed and we were unable to recover it. 00:30:38.946 [2024-04-17 06:56:43.236194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.236326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.236354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.946 qpair failed and we were unable to recover it. 00:30:38.946 [2024-04-17 06:56:43.236537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.236709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.236736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.946 qpair failed and we were unable to recover it. 00:30:38.946 [2024-04-17 06:56:43.236931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.237097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.237125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.946 qpair failed and we were unable to recover it. 
00:30:38.946 [2024-04-17 06:56:43.237349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.237520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.237548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.946 qpair failed and we were unable to recover it. 00:30:38.946 [2024-04-17 06:56:43.237722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.237918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.237946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.946 qpair failed and we were unable to recover it. 00:30:38.946 [2024-04-17 06:56:43.238088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.238281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.238307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.946 qpair failed and we were unable to recover it. 00:30:38.946 [2024-04-17 06:56:43.238487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.238630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.238670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.946 qpair failed and we were unable to recover it. 00:30:38.946 [2024-04-17 06:56:43.238872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.239024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.239049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.946 qpair failed and we were unable to recover it. 00:30:38.946 [2024-04-17 06:56:43.239253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.239457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.239495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.946 qpair failed and we were unable to recover it. 00:30:38.946 [2024-04-17 06:56:43.239630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.946 [2024-04-17 06:56:43.239774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.239802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.947 qpair failed and we were unable to recover it. 
00:30:38.947 [2024-04-17 06:56:43.239997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.240160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.240196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.947 qpair failed and we were unable to recover it. 00:30:38.947 [2024-04-17 06:56:43.240337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.240494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.240535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.947 qpair failed and we were unable to recover it. 00:30:38.947 [2024-04-17 06:56:43.240703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.240883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.240910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.947 qpair failed and we were unable to recover it. 00:30:38.947 [2024-04-17 06:56:43.241079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.241227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.241256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.947 qpair failed and we were unable to recover it. 00:30:38.947 [2024-04-17 06:56:43.241425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.241599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.241626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.947 qpair failed and we were unable to recover it. 00:30:38.947 [2024-04-17 06:56:43.241810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.241960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.242002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.947 qpair failed and we were unable to recover it. 00:30:38.947 [2024-04-17 06:56:43.242153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.242362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.242390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.947 qpair failed and we were unable to recover it. 
00:30:38.947 [2024-04-17 06:56:43.242566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.242775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.242802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.947 qpair failed and we were unable to recover it. 00:30:38.947 [2024-04-17 06:56:43.242976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.243121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.243149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.947 qpair failed and we were unable to recover it. 00:30:38.947 [2024-04-17 06:56:43.243312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.243474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.243500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.947 qpair failed and we were unable to recover it. 00:30:38.947 [2024-04-17 06:56:43.243634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.243822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.243864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.947 qpair failed and we were unable to recover it. 00:30:38.947 [2024-04-17 06:56:43.244048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.244233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.244276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.947 qpair failed and we were unable to recover it. 00:30:38.947 [2024-04-17 06:56:43.244479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.244662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.244687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.947 qpair failed and we were unable to recover it. 00:30:38.947 [2024-04-17 06:56:43.244843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.245050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.245078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.947 qpair failed and we were unable to recover it. 
00:30:38.947 [2024-04-17 06:56:43.245288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.245442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.245491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.947 qpair failed and we were unable to recover it. 00:30:38.947 [2024-04-17 06:56:43.245665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.245867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.245892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.947 qpair failed and we were unable to recover it. 00:30:38.947 [2024-04-17 06:56:43.246050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.246168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.246198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.947 qpair failed and we were unable to recover it. 00:30:38.947 [2024-04-17 06:56:43.246348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.246469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.246494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.947 qpair failed and we were unable to recover it. 00:30:38.947 [2024-04-17 06:56:43.246657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.246841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.246867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.947 qpair failed and we were unable to recover it. 00:30:38.947 [2024-04-17 06:56:43.247001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.247188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.247216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.947 qpair failed and we were unable to recover it. 00:30:38.947 [2024-04-17 06:56:43.247417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.247545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.247570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.947 qpair failed and we were unable to recover it. 
00:30:38.947 [2024-04-17 06:56:43.247695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.247876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.247901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.947 qpair failed and we were unable to recover it. 00:30:38.947 [2024-04-17 06:56:43.248121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.248288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.248313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.947 qpair failed and we were unable to recover it. 00:30:38.947 [2024-04-17 06:56:43.248443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.248604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.248629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.947 qpair failed and we were unable to recover it. 00:30:38.947 [2024-04-17 06:56:43.248847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.249057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.249084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.947 qpair failed and we were unable to recover it. 00:30:38.947 [2024-04-17 06:56:43.249227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.249360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.249386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.947 qpair failed and we were unable to recover it. 00:30:38.947 [2024-04-17 06:56:43.249525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.249782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.249810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.947 qpair failed and we were unable to recover it. 00:30:38.947 [2024-04-17 06:56:43.249968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.250171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.250202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.947 qpair failed and we were unable to recover it. 
00:30:38.947 [2024-04-17 06:56:43.250329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.947 [2024-04-17 06:56:43.250514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.250540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.948 qpair failed and we were unable to recover it. 00:30:38.948 [2024-04-17 06:56:43.250670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.250827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.250868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.948 qpair failed and we were unable to recover it. 00:30:38.948 [2024-04-17 06:56:43.251047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.251249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.251277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.948 qpair failed and we were unable to recover it. 00:30:38.948 [2024-04-17 06:56:43.251454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.251617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.251657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.948 qpair failed and we were unable to recover it. 00:30:38.948 [2024-04-17 06:56:43.251842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.252025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.252066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.948 qpair failed and we were unable to recover it. 00:30:38.948 [2024-04-17 06:56:43.252234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.252371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.252400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.948 qpair failed and we were unable to recover it. 00:30:38.948 [2024-04-17 06:56:43.252536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.252715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.252743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.948 qpair failed and we were unable to recover it. 
00:30:38.948 [2024-04-17 06:56:43.252950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.253155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.253198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.948 qpair failed and we were unable to recover it. 00:30:38.948 [2024-04-17 06:56:43.253414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.253620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.253648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.948 qpair failed and we were unable to recover it. 00:30:38.948 [2024-04-17 06:56:43.253802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.253995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.254023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.948 qpair failed and we were unable to recover it. 00:30:38.948 [2024-04-17 06:56:43.254160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.254301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.254329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.948 qpair failed and we were unable to recover it. 00:30:38.948 [2024-04-17 06:56:43.254508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.254689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.254714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.948 qpair failed and we were unable to recover it. 00:30:38.948 [2024-04-17 06:56:43.254877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.255010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.255034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.948 qpair failed and we were unable to recover it. 00:30:38.948 [2024-04-17 06:56:43.255194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.255361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.255389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.948 qpair failed and we were unable to recover it. 
00:30:38.948 [2024-04-17 06:56:43.255589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.255782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.255810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.948 qpair failed and we were unable to recover it. 00:30:38.948 [2024-04-17 06:56:43.255985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.256151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.256188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.948 qpair failed and we were unable to recover it. 00:30:38.948 [2024-04-17 06:56:43.256342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.256502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.256529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.948 qpair failed and we were unable to recover it. 00:30:38.948 [2024-04-17 06:56:43.256711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.256880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.256908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.948 qpair failed and we were unable to recover it. 00:30:38.948 [2024-04-17 06:56:43.257107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.257294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.257322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.948 qpair failed and we were unable to recover it. 00:30:38.948 [2024-04-17 06:56:43.257496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.257677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.257720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.948 qpair failed and we were unable to recover it. 00:30:38.948 [2024-04-17 06:56:43.257851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.257987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.258015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.948 qpair failed and we were unable to recover it. 
00:30:38.948 [2024-04-17 06:56:43.258179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.258370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.258396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.948 qpair failed and we were unable to recover it. 00:30:38.948 [2024-04-17 06:56:43.258573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.258744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.258771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.948 qpair failed and we were unable to recover it. 00:30:38.948 [2024-04-17 06:56:43.258902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.259084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.259111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.948 qpair failed and we were unable to recover it. 00:30:38.948 [2024-04-17 06:56:43.259262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.259422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.259447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.948 qpair failed and we were unable to recover it. 00:30:38.948 [2024-04-17 06:56:43.259570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.259724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.259765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.948 qpair failed and we were unable to recover it. 00:30:38.948 [2024-04-17 06:56:43.259950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.260118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.260143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.948 qpair failed and we were unable to recover it. 00:30:38.948 [2024-04-17 06:56:43.260326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.260465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.260493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.948 qpair failed and we were unable to recover it. 
00:30:38.948 [2024-04-17 06:56:43.260692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.260939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.260997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.948 qpair failed and we were unable to recover it. 00:30:38.948 [2024-04-17 06:56:43.261218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.948 [2024-04-17 06:56:43.261421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.261449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.949 qpair failed and we were unable to recover it. 00:30:38.949 [2024-04-17 06:56:43.261602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.261726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.261752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.949 qpair failed and we were unable to recover it. 00:30:38.949 [2024-04-17 06:56:43.261960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.262165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.262199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.949 qpair failed and we were unable to recover it. 00:30:38.949 [2024-04-17 06:56:43.262386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.262545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.262570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.949 qpair failed and we were unable to recover it. 00:30:38.949 [2024-04-17 06:56:43.262721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.262850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.262875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.949 qpair failed and we were unable to recover it. 00:30:38.949 [2024-04-17 06:56:43.263022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.263153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.263196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.949 qpair failed and we were unable to recover it. 
00:30:38.949 [2024-04-17 06:56:43.263362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.263498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.263525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.949 qpair failed and we were unable to recover it. 00:30:38.949 [2024-04-17 06:56:43.263674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.263842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.263870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.949 qpair failed and we were unable to recover it. 00:30:38.949 [2024-04-17 06:56:43.264074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.264256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.264285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.949 qpair failed and we were unable to recover it. 00:30:38.949 [2024-04-17 06:56:43.264458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.264632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.264660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.949 qpair failed and we were unable to recover it. 00:30:38.949 [2024-04-17 06:56:43.264798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.264976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.265004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.949 qpair failed and we were unable to recover it. 00:30:38.949 [2024-04-17 06:56:43.265159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.265292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.265320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.949 qpair failed and we were unable to recover it. 00:30:38.949 [2024-04-17 06:56:43.265476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.265631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.265658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.949 qpair failed and we were unable to recover it. 
00:30:38.949 [2024-04-17 06:56:43.265815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.266014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.266039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.949 qpair failed and we were unable to recover it. 00:30:38.949 [2024-04-17 06:56:43.266171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.266358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.266386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.949 qpair failed and we were unable to recover it. 00:30:38.949 [2024-04-17 06:56:43.266529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.266693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.266721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.949 qpair failed and we were unable to recover it. 00:30:38.949 [2024-04-17 06:56:43.266916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.267085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.267113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.949 qpair failed and we were unable to recover it. 00:30:38.949 [2024-04-17 06:56:43.267301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.267435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.267460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.949 qpair failed and we were unable to recover it. 00:30:38.949 [2024-04-17 06:56:43.267649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.267773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.267799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.949 qpair failed and we were unable to recover it. 00:30:38.949 [2024-04-17 06:56:43.267953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.268145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.268170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.949 qpair failed and we were unable to recover it. 
00:30:38.949 [2024-04-17 06:56:43.268341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.268474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.268500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.949 qpair failed and we were unable to recover it. 00:30:38.949 [2024-04-17 06:56:43.268629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.268808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.268833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.949 qpair failed and we were unable to recover it. 00:30:38.949 [2024-04-17 06:56:43.269060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.269262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.269292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.949 qpair failed and we were unable to recover it. 00:30:38.949 [2024-04-17 06:56:43.269476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.269650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.269678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.949 qpair failed and we were unable to recover it. 00:30:38.949 [2024-04-17 06:56:43.269824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.269954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.269979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.949 qpair failed and we were unable to recover it. 00:30:38.949 [2024-04-17 06:56:43.270163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.270353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.270394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.949 qpair failed and we were unable to recover it. 00:30:38.949 [2024-04-17 06:56:43.270590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.270749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.270776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.949 qpair failed and we were unable to recover it. 
00:30:38.949 [2024-04-17 06:56:43.270935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.271088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.271121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.949 qpair failed and we were unable to recover it. 00:30:38.949 [2024-04-17 06:56:43.271287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.271426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.271451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.949 qpair failed and we were unable to recover it. 00:30:38.949 [2024-04-17 06:56:43.271655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.949 [2024-04-17 06:56:43.271858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.271886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.950 qpair failed and we were unable to recover it. 00:30:38.950 [2024-04-17 06:56:43.272067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.272240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.272265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.950 qpair failed and we were unable to recover it. 00:30:38.950 [2024-04-17 06:56:43.272451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.272585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.272610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.950 qpair failed and we were unable to recover it. 00:30:38.950 [2024-04-17 06:56:43.272765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.272932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.272961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.950 qpair failed and we were unable to recover it. 00:30:38.950 [2024-04-17 06:56:43.273106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.273293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.273321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.950 qpair failed and we were unable to recover it. 
00:30:38.950 [2024-04-17 06:56:43.273483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.273635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.273662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.950 qpair failed and we were unable to recover it. 00:30:38.950 [2024-04-17 06:56:43.273830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.273967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.273996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.950 qpair failed and we were unable to recover it. 00:30:38.950 [2024-04-17 06:56:43.274156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.274345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.274370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.950 qpair failed and we were unable to recover it. 00:30:38.950 [2024-04-17 06:56:43.274550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.274716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.274743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.950 qpair failed and we were unable to recover it. 00:30:38.950 [2024-04-17 06:56:43.274884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.275022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.275050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.950 qpair failed and we were unable to recover it. 00:30:38.950 [2024-04-17 06:56:43.275230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.275348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.275373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.950 qpair failed and we were unable to recover it. 00:30:38.950 [2024-04-17 06:56:43.275543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.275717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.275745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.950 qpair failed and we were unable to recover it. 
00:30:38.950 [2024-04-17 06:56:43.275897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.276030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.276055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.950 qpair failed and we were unable to recover it. 00:30:38.950 [2024-04-17 06:56:43.276220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.276417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.276449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.950 qpair failed and we were unable to recover it. 00:30:38.950 [2024-04-17 06:56:43.276639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.276919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.276969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.950 qpair failed and we were unable to recover it. 00:30:38.950 [2024-04-17 06:56:43.277210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.277338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.277369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.950 qpair failed and we were unable to recover it. 00:30:38.950 [2024-04-17 06:56:43.277594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.277784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.277828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.950 qpair failed and we were unable to recover it. 00:30:38.950 [2024-04-17 06:56:43.278003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.278203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.278232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.950 qpair failed and we were unable to recover it. 00:30:38.950 [2024-04-17 06:56:43.278378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.278531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.278556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.950 qpair failed and we were unable to recover it. 
00:30:38.950 [2024-04-17 06:56:43.278715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.278921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.278948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.950 qpair failed and we were unable to recover it. 00:30:38.950 [2024-04-17 06:56:43.279086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.279257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.279286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.950 qpair failed and we were unable to recover it. 00:30:38.950 [2024-04-17 06:56:43.279435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.279583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.279612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.950 qpair failed and we were unable to recover it. 00:30:38.950 [2024-04-17 06:56:43.279789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.279932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.279961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.950 qpair failed and we were unable to recover it. 00:30:38.950 [2024-04-17 06:56:43.280132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.280277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.950 [2024-04-17 06:56:43.280302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.950 qpair failed and we were unable to recover it. 00:30:38.950 [2024-04-17 06:56:43.280425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.280547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.280572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.951 qpair failed and we were unable to recover it. 00:30:38.951 [2024-04-17 06:56:43.280706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.280858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.280886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.951 qpair failed and we were unable to recover it. 
00:30:38.951 [2024-04-17 06:56:43.281049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.281220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.281249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.951 qpair failed and we were unable to recover it. 00:30:38.951 [2024-04-17 06:56:43.281426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.281553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.281579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.951 qpair failed and we were unable to recover it. 00:30:38.951 [2024-04-17 06:56:43.281760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.281920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.281948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.951 qpair failed and we were unable to recover it. 00:30:38.951 [2024-04-17 06:56:43.282150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.282336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.282364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.951 qpair failed and we were unable to recover it. 00:30:38.951 [2024-04-17 06:56:43.282506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.282676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.282704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.951 qpair failed and we were unable to recover it. 00:30:38.951 [2024-04-17 06:56:43.282849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.283009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.283050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.951 qpair failed and we were unable to recover it. 00:30:38.951 [2024-04-17 06:56:43.283219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.283356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.283383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.951 qpair failed and we were unable to recover it. 
00:30:38.951 [2024-04-17 06:56:43.283571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.283743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.283771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.951 qpair failed and we were unable to recover it. 00:30:38.951 [2024-04-17 06:56:43.283984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.284153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.284189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.951 qpair failed and we were unable to recover it. 00:30:38.951 [2024-04-17 06:56:43.284345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.284478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.284503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.951 qpair failed and we were unable to recover it. 00:30:38.951 [2024-04-17 06:56:43.284693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.284843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.284868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.951 qpair failed and we were unable to recover it. 00:30:38.951 [2024-04-17 06:56:43.285012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.285189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.285217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.951 qpair failed and we were unable to recover it. 00:30:38.951 [2024-04-17 06:56:43.285358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.285546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.285571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.951 qpair failed and we were unable to recover it. 00:30:38.951 [2024-04-17 06:56:43.285698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.285858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.285884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.951 qpair failed and we were unable to recover it. 
00:30:38.951 [2024-04-17 06:56:43.286097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.286255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.286298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.951 qpair failed and we were unable to recover it. 00:30:38.951 [2024-04-17 06:56:43.286440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.286620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.286645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.951 qpair failed and we were unable to recover it. 00:30:38.951 [2024-04-17 06:56:43.286769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.286920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.286944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.951 qpair failed and we were unable to recover it. 00:30:38.951 [2024-04-17 06:56:43.287150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.287327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.287352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.951 qpair failed and we were unable to recover it. 00:30:38.951 [2024-04-17 06:56:43.287503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.287677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.287702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.951 qpair failed and we were unable to recover it. 00:30:38.951 [2024-04-17 06:56:43.287835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.287984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.288008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.951 qpair failed and we were unable to recover it. 00:30:38.951 [2024-04-17 06:56:43.288187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.288353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.288395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.951 qpair failed and we were unable to recover it. 
00:30:38.951 [2024-04-17 06:56:43.288578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.288740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.288780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.951 qpair failed and we were unable to recover it. 00:30:38.951 [2024-04-17 06:56:43.288923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.289105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.289130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.951 qpair failed and we were unable to recover it. 00:30:38.951 [2024-04-17 06:56:43.289305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.289476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.289508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.951 qpair failed and we were unable to recover it. 00:30:38.951 [2024-04-17 06:56:43.289654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.289833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.289857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.951 qpair failed and we were unable to recover it. 00:30:38.951 [2024-04-17 06:56:43.290029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.290149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.290190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.951 qpair failed and we were unable to recover it. 00:30:38.951 [2024-04-17 06:56:43.290316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.290467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.951 [2024-04-17 06:56:43.290494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.951 qpair failed and we were unable to recover it. 00:30:38.952 [2024-04-17 06:56:43.290613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.290782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.290806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.952 qpair failed and we were unable to recover it. 
00:30:38.952 [2024-04-17 06:56:43.291005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.291181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.291207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.952 qpair failed and we were unable to recover it. 00:30:38.952 [2024-04-17 06:56:43.291357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.291493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.291518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.952 qpair failed and we were unable to recover it. 00:30:38.952 [2024-04-17 06:56:43.291676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.291879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.291925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.952 qpair failed and we were unable to recover it. 00:30:38.952 [2024-04-17 06:56:43.292115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.292275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.292301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.952 qpair failed and we were unable to recover it. 00:30:38.952 [2024-04-17 06:56:43.292467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.292665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.292692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.952 qpair failed and we were unable to recover it. 00:30:38.952 [2024-04-17 06:56:43.292874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.293043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.293072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.952 qpair failed and we were unable to recover it. 00:30:38.952 [2024-04-17 06:56:43.293210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.293374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.293403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.952 qpair failed and we were unable to recover it. 
00:30:38.952 [2024-04-17 06:56:43.293571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.293769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.293814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.952 qpair failed and we were unable to recover it. 00:30:38.952 [2024-04-17 06:56:43.294024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.294210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.294235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.952 qpair failed and we were unable to recover it. 00:30:38.952 [2024-04-17 06:56:43.294415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.294590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.294618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.952 qpair failed and we were unable to recover it. 00:30:38.952 [2024-04-17 06:56:43.294787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.294922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.294949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.952 qpair failed and we were unable to recover it. 00:30:38.952 [2024-04-17 06:56:43.295117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.295261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.295289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.952 qpair failed and we were unable to recover it. 00:30:38.952 [2024-04-17 06:56:43.295470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.295614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.295641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.952 qpair failed and we were unable to recover it. 00:30:38.952 [2024-04-17 06:56:43.295821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.296019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.296047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.952 qpair failed and we were unable to recover it. 
00:30:38.952 [2024-04-17 06:56:43.296209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.296379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.296408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.952 qpair failed and we were unable to recover it. 00:30:38.952 [2024-04-17 06:56:43.296585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.296754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.296779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.952 qpair failed and we were unable to recover it. 00:30:38.952 [2024-04-17 06:56:43.296907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.297058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.297083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.952 qpair failed and we were unable to recover it. 00:30:38.952 [2024-04-17 06:56:43.297248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.297374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.297399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.952 qpair failed and we were unable to recover it. 00:30:38.952 [2024-04-17 06:56:43.297633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.297820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.297849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.952 qpair failed and we were unable to recover it. 00:30:38.952 [2024-04-17 06:56:43.298036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.298212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.298241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.952 qpair failed and we were unable to recover it. 00:30:38.952 [2024-04-17 06:56:43.298417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.298591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.298618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.952 qpair failed and we were unable to recover it. 
00:30:38.952 [2024-04-17 06:56:43.298770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.299021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.299048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.952 qpair failed and we were unable to recover it. 00:30:38.952 [2024-04-17 06:56:43.299228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.299376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.299401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.952 qpair failed and we were unable to recover it. 00:30:38.952 [2024-04-17 06:56:43.299592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.299851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.299879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.952 qpair failed and we were unable to recover it. 00:30:38.952 [2024-04-17 06:56:43.300017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.300196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.300225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.952 qpair failed and we were unable to recover it. 00:30:38.952 [2024-04-17 06:56:43.300466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.300607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.300635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.952 qpair failed and we were unable to recover it. 00:30:38.952 [2024-04-17 06:56:43.300813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.300978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.301006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.952 qpair failed and we were unable to recover it. 00:30:38.952 [2024-04-17 06:56:43.301146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.301327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.952 [2024-04-17 06:56:43.301355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.952 qpair failed and we were unable to recover it. 
00:30:38.953 [2024-04-17 06:56:43.301552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.301720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.301748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.953 qpair failed and we were unable to recover it. 00:30:38.953 [2024-04-17 06:56:43.301962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.302207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.302235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.953 qpair failed and we were unable to recover it. 00:30:38.953 [2024-04-17 06:56:43.302412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.302583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.302611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.953 qpair failed and we were unable to recover it. 00:30:38.953 [2024-04-17 06:56:43.302814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.303008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.303055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.953 qpair failed and we were unable to recover it. 00:30:38.953 [2024-04-17 06:56:43.303228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.303436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.303462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.953 qpair failed and we were unable to recover it. 00:30:38.953 [2024-04-17 06:56:43.303583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.303740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.303765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.953 qpair failed and we were unable to recover it. 00:30:38.953 [2024-04-17 06:56:43.303933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.304057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.304082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.953 qpair failed and we were unable to recover it. 
00:30:38.953 [2024-04-17 06:56:43.304310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.304471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.304499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.953 qpair failed and we were unable to recover it. 00:30:38.953 [2024-04-17 06:56:43.304670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.304811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.304836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.953 qpair failed and we were unable to recover it. 00:30:38.953 [2024-04-17 06:56:43.305024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.305173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.305222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.953 qpair failed and we were unable to recover it. 00:30:38.953 [2024-04-17 06:56:43.305398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.305563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.305590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.953 qpair failed and we were unable to recover it. 00:30:38.953 [2024-04-17 06:56:43.305763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.306000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.306025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.953 qpair failed and we were unable to recover it. 00:30:38.953 [2024-04-17 06:56:43.306151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.306361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.306389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.953 qpair failed and we were unable to recover it. 00:30:38.953 [2024-04-17 06:56:43.306554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.306723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.306748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.953 qpair failed and we were unable to recover it. 
00:30:38.953 [2024-04-17 06:56:43.306913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.307085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.307112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.953 qpair failed and we were unable to recover it. 00:30:38.953 [2024-04-17 06:56:43.307324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.307502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.307527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.953 qpair failed and we were unable to recover it. 00:30:38.953 [2024-04-17 06:56:43.307684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.307877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.307901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.953 qpair failed and we were unable to recover it. 00:30:38.953 [2024-04-17 06:56:43.308050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.308207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.308249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.953 qpair failed and we were unable to recover it. 00:30:38.953 [2024-04-17 06:56:43.308402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.308605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.308633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.953 qpair failed and we were unable to recover it. 00:30:38.953 [2024-04-17 06:56:43.308789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.308988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.309013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.953 qpair failed and we were unable to recover it. 00:30:38.953 [2024-04-17 06:56:43.309144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.309275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.309301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.953 qpair failed and we were unable to recover it. 
00:30:38.953 [2024-04-17 06:56:43.309483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.309641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.309682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.953 qpair failed and we were unable to recover it. 00:30:38.953 [2024-04-17 06:56:43.309900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.310057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.310085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.953 qpair failed and we were unable to recover it. 00:30:38.953 [2024-04-17 06:56:43.310283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.310456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.310481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.953 qpair failed and we were unable to recover it. 00:30:38.953 [2024-04-17 06:56:43.310663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.310845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.310873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.953 qpair failed and we were unable to recover it. 00:30:38.953 [2024-04-17 06:56:43.311054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.311190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.311217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.953 qpair failed and we were unable to recover it. 00:30:38.953 [2024-04-17 06:56:43.311432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.311626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.311654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.953 qpair failed and we were unable to recover it. 00:30:38.953 [2024-04-17 06:56:43.311792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.311937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.311965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.953 qpair failed and we were unable to recover it. 
00:30:38.953 [2024-04-17 06:56:43.312149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.953 [2024-04-17 06:56:43.312304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.954 [2024-04-17 06:56:43.312350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.954 qpair failed and we were unable to recover it. 00:30:38.954 [2024-04-17 06:56:43.312551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.954 [2024-04-17 06:56:43.312684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.954 [2024-04-17 06:56:43.312709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.954 qpair failed and we were unable to recover it. 00:30:38.954 [2024-04-17 06:56:43.312874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.954 [2024-04-17 06:56:43.313086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.954 [2024-04-17 06:56:43.313110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.954 qpair failed and we were unable to recover it. 00:30:38.954 [2024-04-17 06:56:43.313341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.954 [2024-04-17 06:56:43.313549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.954 [2024-04-17 06:56:43.313574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.954 qpair failed and we were unable to recover it. 00:30:38.954 [2024-04-17 06:56:43.313752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.954 [2024-04-17 06:56:43.313937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.954 [2024-04-17 06:56:43.313962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.954 qpair failed and we were unable to recover it. 00:30:38.954 [2024-04-17 06:56:43.314133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.954 [2024-04-17 06:56:43.314281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.954 [2024-04-17 06:56:43.314308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.954 qpair failed and we were unable to recover it. 00:30:38.954 [2024-04-17 06:56:43.314547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.954 [2024-04-17 06:56:43.314669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.954 [2024-04-17 06:56:43.314695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.954 qpair failed and we were unable to recover it. 
00:30:38.954 [2024-04-17 06:56:43.314842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:38.954 [2024-04-17 06:56:43.314976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:38.954 [2024-04-17 06:56:43.315001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420
00:30:38.954 qpair failed and we were unable to recover it.
00:30:38.954 [... the same three-record error sequence (posix.c:1037:posix_sock_create: connect() failed, errno = 111 -> nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it) repeats continuously for every reconnect attempt from 06:56:43.315133 through 06:56:43.371548 ...]
00:30:38.961 [2024-04-17 06:56:43.371732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.371884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.371909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.961 qpair failed and we were unable to recover it. 00:30:38.961 [2024-04-17 06:56:43.372069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.372216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.372242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.961 qpair failed and we were unable to recover it. 00:30:38.961 [2024-04-17 06:56:43.372419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.372594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.372628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.961 qpair failed and we were unable to recover it. 00:30:38.961 [2024-04-17 06:56:43.372814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.372964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.372989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.961 qpair failed and we were unable to recover it. 00:30:38.961 [2024-04-17 06:56:43.373135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.373305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.373331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.961 qpair failed and we were unable to recover it. 00:30:38.961 [2024-04-17 06:56:43.373467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.373645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.373670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.961 qpair failed and we were unable to recover it. 00:30:38.961 [2024-04-17 06:56:43.373813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.374023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.374049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.961 qpair failed and we were unable to recover it. 
00:30:38.961 [2024-04-17 06:56:43.374186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.374335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.374364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.961 qpair failed and we were unable to recover it. 00:30:38.961 [2024-04-17 06:56:43.374571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.374735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.374776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.961 qpair failed and we were unable to recover it. 00:30:38.961 [2024-04-17 06:56:43.374978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.375144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.375171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.961 qpair failed and we were unable to recover it. 00:30:38.961 [2024-04-17 06:56:43.375363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.375484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.375510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.961 qpair failed and we were unable to recover it. 00:30:38.961 [2024-04-17 06:56:43.375723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.375903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.375946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.961 qpair failed and we were unable to recover it. 00:30:38.961 [2024-04-17 06:56:43.376149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.376293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.376319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.961 qpair failed and we were unable to recover it. 00:30:38.961 [2024-04-17 06:56:43.376475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.376650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.376677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.961 qpair failed and we were unable to recover it. 
00:30:38.961 [2024-04-17 06:56:43.376841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.377008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.377033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.961 qpair failed and we were unable to recover it. 00:30:38.961 [2024-04-17 06:56:43.377167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.377380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.377407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.961 qpair failed and we were unable to recover it. 00:30:38.961 [2024-04-17 06:56:43.377588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.377710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.377735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.961 qpair failed and we were unable to recover it. 00:30:38.961 [2024-04-17 06:56:43.377868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.378031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.378058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.961 qpair failed and we were unable to recover it. 00:30:38.961 [2024-04-17 06:56:43.378222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.378372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.378397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.961 qpair failed and we were unable to recover it. 00:30:38.961 [2024-04-17 06:56:43.378599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.378778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.378803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.961 qpair failed and we were unable to recover it. 00:30:38.961 [2024-04-17 06:56:43.378955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.379155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.961 [2024-04-17 06:56:43.379190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.962 qpair failed and we were unable to recover it. 
00:30:38.962 [2024-04-17 06:56:43.379340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.379502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.379530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.962 qpair failed and we were unable to recover it. 00:30:38.962 [2024-04-17 06:56:43.379679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.379855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.379883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.962 qpair failed and we were unable to recover it. 00:30:38.962 [2024-04-17 06:56:43.380048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.380216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.380245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.962 qpair failed and we were unable to recover it. 00:30:38.962 [2024-04-17 06:56:43.380407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.380562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.380603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.962 qpair failed and we were unable to recover it. 00:30:38.962 [2024-04-17 06:56:43.380747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.380918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.380945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.962 qpair failed and we were unable to recover it. 00:30:38.962 [2024-04-17 06:56:43.381083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.381255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.381283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.962 qpair failed and we were unable to recover it. 00:30:38.962 [2024-04-17 06:56:43.381442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.381620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.381647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.962 qpair failed and we were unable to recover it. 
00:30:38.962 [2024-04-17 06:56:43.381824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.381974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.382016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.962 qpair failed and we were unable to recover it. 00:30:38.962 [2024-04-17 06:56:43.382191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.382366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.382393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.962 qpair failed and we were unable to recover it. 00:30:38.962 [2024-04-17 06:56:43.382536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.382680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.382707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.962 qpair failed and we were unable to recover it. 00:30:38.962 [2024-04-17 06:56:43.382870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.383033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.383061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.962 qpair failed and we were unable to recover it. 00:30:38.962 [2024-04-17 06:56:43.383275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.383409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.383436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.962 qpair failed and we were unable to recover it. 00:30:38.962 [2024-04-17 06:56:43.383614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.383747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.383775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.962 qpair failed and we were unable to recover it. 00:30:38.962 [2024-04-17 06:56:43.383911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.384085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.384113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.962 qpair failed and we were unable to recover it. 
00:30:38.962 [2024-04-17 06:56:43.384268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.384433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.384461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.962 qpair failed and we were unable to recover it. 00:30:38.962 [2024-04-17 06:56:43.384607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.384764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.384788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.962 qpair failed and we were unable to recover it. 00:30:38.962 [2024-04-17 06:56:43.384967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.385127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.385155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.962 qpair failed and we were unable to recover it. 00:30:38.962 [2024-04-17 06:56:43.385306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.385486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.385513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.962 qpair failed and we were unable to recover it. 00:30:38.962 [2024-04-17 06:56:43.385678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.385880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.385908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.962 qpair failed and we were unable to recover it. 00:30:38.962 [2024-04-17 06:56:43.386086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.386235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.386278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.962 qpair failed and we were unable to recover it. 00:30:38.962 [2024-04-17 06:56:43.386443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.386596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.386624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.962 qpair failed and we were unable to recover it. 
00:30:38.962 [2024-04-17 06:56:43.386762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.386930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.386958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.962 qpair failed and we were unable to recover it. 00:30:38.962 [2024-04-17 06:56:43.387121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.387287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.387315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.962 qpair failed and we were unable to recover it. 00:30:38.962 [2024-04-17 06:56:43.387467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.387614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.387639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.962 qpair failed and we were unable to recover it. 00:30:38.962 [2024-04-17 06:56:43.387792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.387964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.387989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.962 qpair failed and we were unable to recover it. 00:30:38.962 [2024-04-17 06:56:43.388171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.388330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.388358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.962 qpair failed and we were unable to recover it. 00:30:38.962 [2024-04-17 06:56:43.388532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.388701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.388728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.962 qpair failed and we were unable to recover it. 00:30:38.962 [2024-04-17 06:56:43.388873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.389032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.389072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.962 qpair failed and we were unable to recover it. 
00:30:38.962 [2024-04-17 06:56:43.389252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.389386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.962 [2024-04-17 06:56:43.389414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.963 qpair failed and we were unable to recover it. 00:30:38.963 [2024-04-17 06:56:43.389586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.389775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.389820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.963 qpair failed and we were unable to recover it. 00:30:38.963 [2024-04-17 06:56:43.389984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.390150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.390189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.963 qpair failed and we were unable to recover it. 00:30:38.963 [2024-04-17 06:56:43.390329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.390478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.390504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.963 qpair failed and we were unable to recover it. 00:30:38.963 [2024-04-17 06:56:43.390680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.390889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.390921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.963 qpair failed and we were unable to recover it. 00:30:38.963 [2024-04-17 06:56:43.391122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.391270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.391298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.963 qpair failed and we were unable to recover it. 00:30:38.963 [2024-04-17 06:56:43.391464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.391591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.391619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.963 qpair failed and we were unable to recover it. 
00:30:38.963 [2024-04-17 06:56:43.391792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.391995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.392022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.963 qpair failed and we were unable to recover it. 00:30:38.963 [2024-04-17 06:56:43.392160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.392352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.392380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.963 qpair failed and we were unable to recover it. 00:30:38.963 [2024-04-17 06:56:43.392512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.392642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.392669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.963 qpair failed and we were unable to recover it. 00:30:38.963 [2024-04-17 06:56:43.392840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.393041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.393069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.963 qpair failed and we were unable to recover it. 00:30:38.963 [2024-04-17 06:56:43.393269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.393445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.393485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.963 qpair failed and we were unable to recover it. 00:30:38.963 [2024-04-17 06:56:43.393663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.393861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.393889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.963 qpair failed and we were unable to recover it. 00:30:38.963 [2024-04-17 06:56:43.394097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.394233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.394259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.963 qpair failed and we were unable to recover it. 
00:30:38.963 [2024-04-17 06:56:43.394482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.394633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.394658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.963 qpair failed and we were unable to recover it. 00:30:38.963 [2024-04-17 06:56:43.394816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.394979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.395006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.963 qpair failed and we were unable to recover it. 00:30:38.963 [2024-04-17 06:56:43.395158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.395313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.395338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.963 qpair failed and we were unable to recover it. 00:30:38.963 [2024-04-17 06:56:43.395485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.395660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.395688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.963 qpair failed and we were unable to recover it. 00:30:38.963 [2024-04-17 06:56:43.395835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.396003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.396031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.963 qpair failed and we were unable to recover it. 00:30:38.963 [2024-04-17 06:56:43.396210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.396386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.396414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.963 qpair failed and we were unable to recover it. 00:30:38.963 [2024-04-17 06:56:43.396552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.396754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.396781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.963 qpair failed and we were unable to recover it. 
00:30:38.963 [2024-04-17 06:56:43.396977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.397183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.397211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.963 qpair failed and we were unable to recover it. 00:30:38.963 [2024-04-17 06:56:43.397372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.397520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.397547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.963 qpair failed and we were unable to recover it. 00:30:38.963 [2024-04-17 06:56:43.397704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.397886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.397928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.963 qpair failed and we were unable to recover it. 00:30:38.963 [2024-04-17 06:56:43.398074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.398287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.398313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.963 qpair failed and we were unable to recover it. 00:30:38.963 [2024-04-17 06:56:43.398500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.398645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.398672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.963 qpair failed and we were unable to recover it. 00:30:38.963 [2024-04-17 06:56:43.398845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.398982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.399010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.963 qpair failed and we were unable to recover it. 00:30:38.963 [2024-04-17 06:56:43.399215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.399385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.399426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.963 qpair failed and we were unable to recover it. 
00:30:38.963 [2024-04-17 06:56:43.399561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.399744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.399769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.963 qpair failed and we were unable to recover it. 00:30:38.963 [2024-04-17 06:56:43.399899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.400088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.963 [2024-04-17 06:56:43.400113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.964 qpair failed and we were unable to recover it. 00:30:38.964 [2024-04-17 06:56:43.400244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.400381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.400406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.964 qpair failed and we were unable to recover it. 00:30:38.964 [2024-04-17 06:56:43.400559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.400707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.400748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.964 qpair failed and we were unable to recover it. 00:30:38.964 [2024-04-17 06:56:43.400960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.401104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.401129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.964 qpair failed and we were unable to recover it. 00:30:38.964 [2024-04-17 06:56:43.401287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.401410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.401435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.964 qpair failed and we were unable to recover it. 00:30:38.964 [2024-04-17 06:56:43.401620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.401767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.401792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.964 qpair failed and we were unable to recover it. 
00:30:38.964 [2024-04-17 06:56:43.401923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.402073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.402114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.964 qpair failed and we were unable to recover it. 00:30:38.964 [2024-04-17 06:56:43.402291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.402466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.402494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.964 qpair failed and we were unable to recover it. 00:30:38.964 [2024-04-17 06:56:43.402695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.402877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.402902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.964 qpair failed and we were unable to recover it. 00:30:38.964 [2024-04-17 06:56:43.403025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.403208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.403251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.964 qpair failed and we were unable to recover it. 00:30:38.964 [2024-04-17 06:56:43.403399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.403547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.403571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.964 qpair failed and we were unable to recover it. 00:30:38.964 [2024-04-17 06:56:43.403725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.403875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.403902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.964 qpair failed and we were unable to recover it. 00:30:38.964 [2024-04-17 06:56:43.404081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.404266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.404307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.964 qpair failed and we were unable to recover it. 
00:30:38.964 [2024-04-17 06:56:43.404439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.404654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.404679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.964 qpair failed and we were unable to recover it. 00:30:38.964 [2024-04-17 06:56:43.404831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.404952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.404979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.964 qpair failed and we were unable to recover it. 00:30:38.964 [2024-04-17 06:56:43.405183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.405307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.405332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.964 qpair failed and we were unable to recover it. 00:30:38.964 [2024-04-17 06:56:43.405507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.405677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.405705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.964 qpair failed and we were unable to recover it. 00:30:38.964 [2024-04-17 06:56:43.405918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.406072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.406112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.964 qpair failed and we were unable to recover it. 00:30:38.964 [2024-04-17 06:56:43.406296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.406468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.406497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.964 qpair failed and we were unable to recover it. 00:30:38.964 [2024-04-17 06:56:43.406662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.406844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.406870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.964 qpair failed and we were unable to recover it. 
00:30:38.964 [2024-04-17 06:56:43.407022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.407257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.407283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.964 qpair failed and we were unable to recover it. 00:30:38.964 [2024-04-17 06:56:43.407460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.407599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.407627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.964 qpair failed and we were unable to recover it. 00:30:38.964 [2024-04-17 06:56:43.407827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.407971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.407995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.964 qpair failed and we were unable to recover it. 00:30:38.964 [2024-04-17 06:56:43.408130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.408343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.408368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.964 qpair failed and we were unable to recover it. 00:30:38.964 [2024-04-17 06:56:43.408522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.408730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.408757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.964 qpair failed and we were unable to recover it. 00:30:38.964 [2024-04-17 06:56:43.408889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.409066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.409090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.964 qpair failed and we were unable to recover it. 00:30:38.964 [2024-04-17 06:56:43.409269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.409437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.409479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.964 qpair failed and we were unable to recover it. 
00:30:38.964 [2024-04-17 06:56:43.409654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.409825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.409854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.964 qpair failed and we were unable to recover it. 00:30:38.964 [2024-04-17 06:56:43.410016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.410214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.410242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.964 qpair failed and we were unable to recover it. 00:30:38.964 [2024-04-17 06:56:43.410389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.410524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.964 [2024-04-17 06:56:43.410552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.964 qpair failed and we were unable to recover it. 00:30:38.965 [2024-04-17 06:56:43.410753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.410986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.411039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.965 qpair failed and we were unable to recover it. 00:30:38.965 [2024-04-17 06:56:43.411218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.411381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.411422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.965 qpair failed and we were unable to recover it. 00:30:38.965 [2024-04-17 06:56:43.411570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.411735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.411763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.965 qpair failed and we were unable to recover it. 00:30:38.965 [2024-04-17 06:56:43.411963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.412136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.412161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.965 qpair failed and we were unable to recover it. 
00:30:38.965 [2024-04-17 06:56:43.412324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.412447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.412488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.965 qpair failed and we were unable to recover it. 00:30:38.965 [2024-04-17 06:56:43.412647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.412846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.412873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.965 qpair failed and we were unable to recover it. 00:30:38.965 [2024-04-17 06:56:43.413034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.413171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.413206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.965 qpair failed and we were unable to recover it. 00:30:38.965 [2024-04-17 06:56:43.413348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.413518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.413545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.965 qpair failed and we were unable to recover it. 00:30:38.965 [2024-04-17 06:56:43.413758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.413894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.413922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.965 qpair failed and we were unable to recover it. 00:30:38.965 [2024-04-17 06:56:43.414099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.414264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.414292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.965 qpair failed and we were unable to recover it. 00:30:38.965 [2024-04-17 06:56:43.414493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.414644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.414671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.965 qpair failed and we were unable to recover it. 
00:30:38.965 [2024-04-17 06:56:43.414811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.414980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.415007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.965 qpair failed and we were unable to recover it. 00:30:38.965 [2024-04-17 06:56:43.415145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.415284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.415309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.965 qpair failed and we were unable to recover it. 00:30:38.965 [2024-04-17 06:56:43.415471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.415641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.415668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.965 qpair failed and we were unable to recover it. 00:30:38.965 [2024-04-17 06:56:43.415831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.416029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.416056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.965 qpair failed and we were unable to recover it. 00:30:38.965 [2024-04-17 06:56:43.416208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.416381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.416409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.965 qpair failed and we were unable to recover it. 00:30:38.965 [2024-04-17 06:56:43.416630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.416815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.416861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.965 qpair failed and we were unable to recover it. 00:30:38.965 [2024-04-17 06:56:43.417061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.417203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.417231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.965 qpair failed and we were unable to recover it. 
00:30:38.965 [2024-04-17 06:56:43.417366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.417578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.417602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.965 qpair failed and we were unable to recover it. 00:30:38.965 [2024-04-17 06:56:43.417761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.417974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.418001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.965 qpair failed and we were unable to recover it. 00:30:38.965 [2024-04-17 06:56:43.418144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.418301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.418327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.965 qpair failed and we were unable to recover it. 00:30:38.965 [2024-04-17 06:56:43.418547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.418740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.418764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.965 qpair failed and we were unable to recover it. 00:30:38.965 [2024-04-17 06:56:43.418963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.419125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.419152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.965 qpair failed and we were unable to recover it. 00:30:38.965 [2024-04-17 06:56:43.419318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.419461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.965 [2024-04-17 06:56:43.419491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.965 qpair failed and we were unable to recover it. 00:30:38.966 [2024-04-17 06:56:43.419632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.419778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.419802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.966 qpair failed and we were unable to recover it. 
00:30:38.966 [2024-04-17 06:56:43.419965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.420109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.420138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.966 qpair failed and we were unable to recover it. 00:30:38.966 [2024-04-17 06:56:43.420319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.420481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.420522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.966 qpair failed and we were unable to recover it. 00:30:38.966 [2024-04-17 06:56:43.420730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.420858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.420883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.966 qpair failed and we were unable to recover it. 00:30:38.966 [2024-04-17 06:56:43.421067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.421197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.421222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.966 qpair failed and we were unable to recover it. 00:30:38.966 [2024-04-17 06:56:43.421347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.421495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.421523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.966 qpair failed and we were unable to recover it. 00:30:38.966 [2024-04-17 06:56:43.421686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.421847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.421875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.966 qpair failed and we were unable to recover it. 00:30:38.966 [2024-04-17 06:56:43.422045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.422216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.422245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.966 qpair failed and we were unable to recover it. 
00:30:38.966 [2024-04-17 06:56:43.422405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.422532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.422557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.966 qpair failed and we were unable to recover it. 00:30:38.966 [2024-04-17 06:56:43.422692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.422845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.422870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.966 qpair failed and we were unable to recover it. 00:30:38.966 [2024-04-17 06:56:43.423091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.423250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.423295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.966 qpair failed and we were unable to recover it. 00:30:38.966 [2024-04-17 06:56:43.423501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.423644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.423669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.966 qpair failed and we were unable to recover it. 00:30:38.966 [2024-04-17 06:56:43.423847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.423997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.424022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.966 qpair failed and we were unable to recover it. 00:30:38.966 [2024-04-17 06:56:43.424208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.424349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.424376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.966 qpair failed and we were unable to recover it. 00:30:38.966 [2024-04-17 06:56:43.424547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.424772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.424796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.966 qpair failed and we were unable to recover it. 
00:30:38.966 [2024-04-17 06:56:43.424943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.425096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.425121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.966 qpair failed and we were unable to recover it. 00:30:38.966 [2024-04-17 06:56:43.425248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.425365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.425390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.966 qpair failed and we were unable to recover it. 00:30:38.966 [2024-04-17 06:56:43.425530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.425739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.425764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.966 qpair failed and we were unable to recover it. 00:30:38.966 [2024-04-17 06:56:43.425913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.426059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.426085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.966 qpair failed and we were unable to recover it. 00:30:38.966 [2024-04-17 06:56:43.426233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.426355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.426380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.966 qpair failed and we were unable to recover it. 00:30:38.966 [2024-04-17 06:56:43.426515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.426635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.426660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.966 qpair failed and we were unable to recover it. 00:30:38.966 [2024-04-17 06:56:43.426793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.426974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.427001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.966 qpair failed and we were unable to recover it. 
00:30:38.966 [2024-04-17 06:56:43.427174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.427318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.427347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.966 qpair failed and we were unable to recover it. 00:30:38.966 [2024-04-17 06:56:43.427560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.427759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.427791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.966 qpair failed and we were unable to recover it. 00:30:38.966 [2024-04-17 06:56:43.427979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.428130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.428155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:38.966 qpair failed and we were unable to recover it. 00:30:38.966 [2024-04-17 06:56:43.428281] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ed3a0 is same with the state(5) to be set 00:30:38.966 [2024-04-17 06:56:43.428512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.428804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.428838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73dc000b90 with addr=10.0.0.2, port=4420 00:30:38.966 qpair failed and we were unable to recover it. 00:30:38.966 [2024-04-17 06:56:43.429047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.429234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.429262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73dc000b90 with addr=10.0.0.2, port=4420 00:30:38.966 qpair failed and we were unable to recover it. 00:30:38.966 [2024-04-17 06:56:43.429388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.429515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.429541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73dc000b90 with addr=10.0.0.2, port=4420 00:30:38.966 qpair failed and we were unable to recover it. 00:30:38.966 [2024-04-17 06:56:43.429697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.966 [2024-04-17 06:56:43.429853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.429878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73dc000b90 with addr=10.0.0.2, port=4420 00:30:38.967 qpair failed and we were unable to recover it. 
00:30:38.967 [2024-04-17 06:56:43.430068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.430251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.430278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73dc000b90 with addr=10.0.0.2, port=4420 00:30:38.967 qpair failed and we were unable to recover it. 00:30:38.967 [2024-04-17 06:56:43.430408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.430581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.430607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73dc000b90 with addr=10.0.0.2, port=4420 00:30:38.967 qpair failed and we were unable to recover it. 00:30:38.967 [2024-04-17 06:56:43.430767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.430934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.430963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73dc000b90 with addr=10.0.0.2, port=4420 00:30:38.967 qpair failed and we were unable to recover it. 00:30:38.967 [2024-04-17 06:56:43.431094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.431232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.431258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73dc000b90 with addr=10.0.0.2, port=4420 00:30:38.967 qpair failed and we were unable to recover it. 00:30:38.967 [2024-04-17 06:56:43.431405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.431590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.431619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.967 qpair failed and we were unable to recover it. 00:30:38.967 [2024-04-17 06:56:43.431777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.431943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.431985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.967 qpair failed and we were unable to recover it. 00:30:38.967 [2024-04-17 06:56:43.432146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.432293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.432319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.967 qpair failed and we were unable to recover it. 
00:30:38.967 [2024-04-17 06:56:43.432453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.432581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.432606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.967 qpair failed and we were unable to recover it. 00:30:38.967 [2024-04-17 06:56:43.432789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.432910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.432935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.967 qpair failed and we were unable to recover it. 00:30:38.967 [2024-04-17 06:56:43.433116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.433243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.433269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.967 qpair failed and we were unable to recover it. 00:30:38.967 [2024-04-17 06:56:43.433444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.433634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.433676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.967 qpair failed and we were unable to recover it. 00:30:38.967 [2024-04-17 06:56:43.433838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.433984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.434009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.967 qpair failed and we were unable to recover it. 00:30:38.967 [2024-04-17 06:56:43.434168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.434311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.434337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.967 qpair failed and we were unable to recover it. 00:30:38.967 [2024-04-17 06:56:43.434517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.434705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.434747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.967 qpair failed and we were unable to recover it. 
00:30:38.967 [2024-04-17 06:56:43.434906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.435047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.435073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.967 qpair failed and we were unable to recover it. 00:30:38.967 [2024-04-17 06:56:43.435264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.435462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.435503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.967 qpair failed and we were unable to recover it. 00:30:38.967 [2024-04-17 06:56:43.435676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.435870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.435914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.967 qpair failed and we were unable to recover it. 00:30:38.967 [2024-04-17 06:56:43.436055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.436236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.436265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.967 qpair failed and we were unable to recover it. 00:30:38.967 [2024-04-17 06:56:43.436441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.436630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.436672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.967 qpair failed and we were unable to recover it. 00:30:38.967 [2024-04-17 06:56:43.436848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.437047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.437072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.967 qpair failed and we were unable to recover it. 00:30:38.967 [2024-04-17 06:56:43.437215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.437371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.437415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.967 qpair failed and we were unable to recover it. 
00:30:38.967 [2024-04-17 06:56:43.437606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.437794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.437838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.967 qpair failed and we were unable to recover it. 00:30:38.967 [2024-04-17 06:56:43.437997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.438137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.438162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.967 qpair failed and we were unable to recover it. 00:30:38.967 [2024-04-17 06:56:43.438354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.438522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.438564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.967 qpair failed and we were unable to recover it. 00:30:38.967 [2024-04-17 06:56:43.438770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.438979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.439004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.967 qpair failed and we were unable to recover it. 00:30:38.967 [2024-04-17 06:56:43.439154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.439368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.439413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.967 qpair failed and we were unable to recover it. 00:30:38.967 [2024-04-17 06:56:43.439598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.439767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.439812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.967 qpair failed and we were unable to recover it. 00:30:38.967 [2024-04-17 06:56:43.440004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.440156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.967 [2024-04-17 06:56:43.440191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.967 qpair failed and we were unable to recover it. 
00:30:38.967 [2024-04-17 06:56:43.440376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.440602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.440644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.968 qpair failed and we were unable to recover it. 00:30:38.968 [2024-04-17 06:56:43.440806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.441003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.441029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.968 qpair failed and we were unable to recover it. 00:30:38.968 [2024-04-17 06:56:43.441182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.441359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.441401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.968 qpair failed and we were unable to recover it. 00:30:38.968 [2024-04-17 06:56:43.441601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.441774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.441799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.968 qpair failed and we were unable to recover it. 00:30:38.968 [2024-04-17 06:56:43.441981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.442105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.442131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.968 qpair failed and we were unable to recover it. 00:30:38.968 [2024-04-17 06:56:43.442290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.442476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.442518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.968 qpair failed and we were unable to recover it. 00:30:38.968 [2024-04-17 06:56:43.442703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.442906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.442933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.968 qpair failed and we were unable to recover it. 
00:30:38.968 [2024-04-17 06:56:43.443094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.443275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.443320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.968 qpair failed and we were unable to recover it. 00:30:38.968 [2024-04-17 06:56:43.443507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.443678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.443721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.968 qpair failed and we were unable to recover it. 00:30:38.968 [2024-04-17 06:56:43.443847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.443995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.444021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.968 qpair failed and we were unable to recover it. 00:30:38.968 [2024-04-17 06:56:43.444150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.444328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.444372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.968 qpair failed and we were unable to recover it. 00:30:38.968 [2024-04-17 06:56:43.444550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.444726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.444753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.968 qpair failed and we were unable to recover it. 00:30:38.968 [2024-04-17 06:56:43.444904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.445052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.445077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.968 qpair failed and we were unable to recover it. 00:30:38.968 [2024-04-17 06:56:43.445252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.445421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.445449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.968 qpair failed and we were unable to recover it. 
00:30:38.968 [2024-04-17 06:56:43.445653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.445803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.445829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.968 qpair failed and we were unable to recover it. 00:30:38.968 [2024-04-17 06:56:43.445982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.446108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.446133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.968 qpair failed and we were unable to recover it. 00:30:38.968 [2024-04-17 06:56:43.446300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.446430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.446456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.968 qpair failed and we were unable to recover it. 00:30:38.968 [2024-04-17 06:56:43.446619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.446746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.446772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.968 qpair failed and we were unable to recover it. 00:30:38.968 [2024-04-17 06:56:43.446929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.447085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.447112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.968 qpair failed and we were unable to recover it. 00:30:38.968 [2024-04-17 06:56:43.447274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.447475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.447518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.968 qpair failed and we were unable to recover it. 00:30:38.968 [2024-04-17 06:56:43.447674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.447847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.447873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.968 qpair failed and we were unable to recover it. 
00:30:38.968 [2024-04-17 06:56:43.448023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.448182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.448208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.968 qpair failed and we were unable to recover it. 00:30:38.968 [2024-04-17 06:56:43.448366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.448527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.448570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.968 qpair failed and we were unable to recover it. 00:30:38.968 [2024-04-17 06:56:43.448751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.448900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.448926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.968 qpair failed and we were unable to recover it. 00:30:38.968 [2024-04-17 06:56:43.449081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.449288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.449332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.968 qpair failed and we were unable to recover it. 00:30:38.968 [2024-04-17 06:56:43.449503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.449698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.449741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.968 qpair failed and we were unable to recover it. 00:30:38.968 [2024-04-17 06:56:43.449927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.450077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.450109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.968 qpair failed and we were unable to recover it. 00:30:38.968 [2024-04-17 06:56:43.450299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.450482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.450511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.968 qpair failed and we were unable to recover it. 
00:30:38.968 [2024-04-17 06:56:43.450706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.450901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.968 [2024-04-17 06:56:43.450943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.968 qpair failed and we were unable to recover it. 00:30:38.969 [2024-04-17 06:56:43.451065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.451245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.451289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.969 qpair failed and we were unable to recover it. 00:30:38.969 [2024-04-17 06:56:43.451463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.451648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.451690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.969 qpair failed and we were unable to recover it. 00:30:38.969 [2024-04-17 06:56:43.451901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.452079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.452104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.969 qpair failed and we were unable to recover it. 00:30:38.969 [2024-04-17 06:56:43.452266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.452424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.452452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.969 qpair failed and we were unable to recover it. 00:30:38.969 [2024-04-17 06:56:43.452665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.452815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.452859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.969 qpair failed and we were unable to recover it. 00:30:38.969 [2024-04-17 06:56:43.452995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.453149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.453186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.969 qpair failed and we were unable to recover it. 
00:30:38.969 [2024-04-17 06:56:43.453347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.453559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.453602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.969 qpair failed and we were unable to recover it. 00:30:38.969 [2024-04-17 06:56:43.453805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.454004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.454033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.969 qpair failed and we were unable to recover it. 00:30:38.969 [2024-04-17 06:56:43.454184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.454406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.454448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.969 qpair failed and we were unable to recover it. 00:30:38.969 [2024-04-17 06:56:43.454597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.454794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.454839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.969 qpair failed and we were unable to recover it. 00:30:38.969 [2024-04-17 06:56:43.454993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.455149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.455187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.969 qpair failed and we were unable to recover it. 00:30:38.969 [2024-04-17 06:56:43.455335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.455538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.455569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.969 qpair failed and we were unable to recover it. 00:30:38.969 [2024-04-17 06:56:43.455791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.455979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.456022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.969 qpair failed and we were unable to recover it. 
00:30:38.969 [2024-04-17 06:56:43.456146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.456313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.456357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.969 qpair failed and we were unable to recover it. 00:30:38.969 [2024-04-17 06:56:43.456564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.456783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.456825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.969 qpair failed and we were unable to recover it. 00:30:38.969 [2024-04-17 06:56:43.456974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.457123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.457148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.969 qpair failed and we were unable to recover it. 00:30:38.969 [2024-04-17 06:56:43.457331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.457536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.457580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.969 qpair failed and we were unable to recover it. 00:30:38.969 [2024-04-17 06:56:43.457769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.457991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.458039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.969 qpair failed and we were unable to recover it. 00:30:38.969 [2024-04-17 06:56:43.458165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.458386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.458435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.969 qpair failed and we were unable to recover it. 00:30:38.969 [2024-04-17 06:56:43.458642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.458816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.458844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.969 qpair failed and we were unable to recover it. 
00:30:38.969 [2024-04-17 06:56:43.458988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.459119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.459145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.969 qpair failed and we were unable to recover it. 00:30:38.969 [2024-04-17 06:56:43.459344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.459508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.459536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.969 qpair failed and we were unable to recover it. 00:30:38.969 [2024-04-17 06:56:43.459761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.459987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.460029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.969 qpair failed and we were unable to recover it. 00:30:38.969 [2024-04-17 06:56:43.460210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.460434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.460487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.969 qpair failed and we were unable to recover it. 00:30:38.969 [2024-04-17 06:56:43.460654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.460844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.460886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.969 qpair failed and we were unable to recover it. 00:30:38.969 [2024-04-17 06:56:43.461054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.969 [2024-04-17 06:56:43.461256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.461301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.970 qpair failed and we were unable to recover it. 00:30:38.970 [2024-04-17 06:56:43.461482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.461674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.461716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.970 qpair failed and we were unable to recover it. 
00:30:38.970 [2024-04-17 06:56:43.461874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.462027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.462057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.970 qpair failed and we were unable to recover it. 00:30:38.970 [2024-04-17 06:56:43.462208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.462420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.462463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.970 qpair failed and we were unable to recover it. 00:30:38.970 [2024-04-17 06:56:43.462628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.462803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.462844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.970 qpair failed and we were unable to recover it. 00:30:38.970 [2024-04-17 06:56:43.463027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.463201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.463244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.970 qpair failed and we were unable to recover it. 00:30:38.970 [2024-04-17 06:56:43.463425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.463626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.463654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.970 qpair failed and we were unable to recover it. 00:30:38.970 [2024-04-17 06:56:43.463838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.463989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.464015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.970 qpair failed and we were unable to recover it. 00:30:38.970 [2024-04-17 06:56:43.464201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.464354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.464401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.970 qpair failed and we were unable to recover it. 
00:30:38.970 [2024-04-17 06:56:43.464589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.464755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.464799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.970 qpair failed and we were unable to recover it. 00:30:38.970 [2024-04-17 06:56:43.464931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.465078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.465103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.970 qpair failed and we were unable to recover it. 00:30:38.970 [2024-04-17 06:56:43.465302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.465466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.465494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.970 qpair failed and we were unable to recover it. 00:30:38.970 [2024-04-17 06:56:43.465695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.465838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.465865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.970 qpair failed and we were unable to recover it. 00:30:38.970 [2024-04-17 06:56:43.465994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.466191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.466217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.970 qpair failed and we were unable to recover it. 00:30:38.970 [2024-04-17 06:56:43.466408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.466601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.466649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.970 qpair failed and we were unable to recover it. 00:30:38.970 [2024-04-17 06:56:43.466790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.466961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.466985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.970 qpair failed and we were unable to recover it. 
00:30:38.970 [2024-04-17 06:56:43.467108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.467282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.467311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.970 qpair failed and we were unable to recover it. 00:30:38.970 [2024-04-17 06:56:43.467507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.467718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.467760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.970 qpair failed and we were unable to recover it. 00:30:38.970 [2024-04-17 06:56:43.467898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.468054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.468079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.970 qpair failed and we were unable to recover it. 00:30:38.970 [2024-04-17 06:56:43.468237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.468431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.468486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.970 qpair failed and we were unable to recover it. 00:30:38.970 [2024-04-17 06:56:43.468664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.468809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.468835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.970 qpair failed and we were unable to recover it. 00:30:38.970 [2024-04-17 06:56:43.468963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.469097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.469123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.970 qpair failed and we were unable to recover it. 00:30:38.970 [2024-04-17 06:56:43.469281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.469477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.469520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.970 qpair failed and we were unable to recover it. 
00:30:38.970 [2024-04-17 06:56:43.469685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.469841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.469867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.970 qpair failed and we were unable to recover it. 00:30:38.970 [2024-04-17 06:56:43.469997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.470124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.470150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.970 qpair failed and we were unable to recover it. 00:30:38.970 [2024-04-17 06:56:43.470336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.470518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.470562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.970 qpair failed and we were unable to recover it. 00:30:38.970 [2024-04-17 06:56:43.470705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.470905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.470931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.970 qpair failed and we were unable to recover it. 00:30:38.970 [2024-04-17 06:56:43.471080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.471233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.970 [2024-04-17 06:56:43.471263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.970 qpair failed and we were unable to recover it. 00:30:38.971 [2024-04-17 06:56:43.471419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.471620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.471665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.971 qpair failed and we were unable to recover it. 00:30:38.971 [2024-04-17 06:56:43.471822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.471952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.471978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.971 qpair failed and we were unable to recover it. 
00:30:38.971 [2024-04-17 06:56:43.472105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.472253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.472280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.971 qpair failed and we were unable to recover it. 00:30:38.971 [2024-04-17 06:56:43.472427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.472627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.472670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.971 qpair failed and we were unable to recover it. 00:30:38.971 [2024-04-17 06:56:43.472822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.472985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.473010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.971 qpair failed and we were unable to recover it. 00:30:38.971 [2024-04-17 06:56:43.473173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.473358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.473386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.971 qpair failed and we were unable to recover it. 00:30:38.971 [2024-04-17 06:56:43.473588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.473763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.473788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.971 qpair failed and we were unable to recover it. 00:30:38.971 [2024-04-17 06:56:43.473911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.474044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.474071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.971 qpair failed and we were unable to recover it. 00:30:38.971 [2024-04-17 06:56:43.474241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.474414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.474456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.971 qpair failed and we were unable to recover it. 
00:30:38.971 [2024-04-17 06:56:43.474632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.474807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.474832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.971 qpair failed and we were unable to recover it. 00:30:38.971 [2024-04-17 06:56:43.475017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.475142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.475186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.971 qpair failed and we were unable to recover it. 00:30:38.971 [2024-04-17 06:56:43.475352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.475542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.475586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.971 qpair failed and we were unable to recover it. 00:30:38.971 [2024-04-17 06:56:43.475797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.475939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.475964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.971 qpair failed and we were unable to recover it. 00:30:38.971 [2024-04-17 06:56:43.476115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.476299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.476344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.971 qpair failed and we were unable to recover it. 00:30:38.971 [2024-04-17 06:56:43.476508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.476736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.476765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.971 qpair failed and we were unable to recover it. 00:30:38.971 [2024-04-17 06:56:43.476910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.477068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.477094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.971 qpair failed and we were unable to recover it. 
00:30:38.971 [2024-04-17 06:56:43.477242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.477420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.477475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.971 qpair failed and we were unable to recover it. 00:30:38.971 [2024-04-17 06:56:43.477622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.477772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.477798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.971 qpair failed and we were unable to recover it. 00:30:38.971 [2024-04-17 06:56:43.477987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.478111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.478136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.971 qpair failed and we were unable to recover it. 00:30:38.971 [2024-04-17 06:56:43.478300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.478475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.478518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.971 qpair failed and we were unable to recover it. 00:30:38.971 [2024-04-17 06:56:43.478705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.478904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.478930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.971 qpair failed and we were unable to recover it. 00:30:38.971 [2024-04-17 06:56:43.479074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.479252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.479294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.971 qpair failed and we were unable to recover it. 00:30:38.971 [2024-04-17 06:56:43.479446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.479688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.479731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.971 qpair failed and we were unable to recover it. 
00:30:38.971 [2024-04-17 06:56:43.479882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.480014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.480039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.971 qpair failed and we were unable to recover it. 00:30:38.971 [2024-04-17 06:56:43.480168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.480341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.480370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.971 qpair failed and we were unable to recover it. 00:30:38.971 [2024-04-17 06:56:43.480570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.480724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.480766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.971 qpair failed and we were unable to recover it. 00:30:38.971 [2024-04-17 06:56:43.480924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.481057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.481083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.971 qpair failed and we were unable to recover it. 00:30:38.971 [2024-04-17 06:56:43.481250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.481397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.481422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.971 qpair failed and we were unable to recover it. 00:30:38.971 [2024-04-17 06:56:43.481556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.481737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.971 [2024-04-17 06:56:43.481762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.972 qpair failed and we were unable to recover it. 00:30:38.972 [2024-04-17 06:56:43.481891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.482051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.482077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.972 qpair failed and we were unable to recover it. 
00:30:38.972 [2024-04-17 06:56:43.482199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.482381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.482424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.972 qpair failed and we were unable to recover it. 00:30:38.972 [2024-04-17 06:56:43.482572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.482724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.482749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.972 qpair failed and we were unable to recover it. 00:30:38.972 [2024-04-17 06:56:43.482902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.483057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.483083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.972 qpair failed and we were unable to recover it. 00:30:38.972 [2024-04-17 06:56:43.483227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.483437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.483464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.972 qpair failed and we were unable to recover it. 00:30:38.972 [2024-04-17 06:56:43.483630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.483797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.483822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.972 qpair failed and we were unable to recover it. 00:30:38.972 [2024-04-17 06:56:43.483978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.484116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.484149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.972 qpair failed and we were unable to recover it. 00:30:38.972 [2024-04-17 06:56:43.484328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.484497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.484539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.972 qpair failed and we were unable to recover it. 
00:30:38.972 [2024-04-17 06:56:43.484725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.484896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.484920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.972 qpair failed and we were unable to recover it. 00:30:38.972 [2024-04-17 06:56:43.485070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.485236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.485264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.972 qpair failed and we were unable to recover it. 00:30:38.972 [2024-04-17 06:56:43.485497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.485682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.485724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.972 qpair failed and we were unable to recover it. 00:30:38.972 [2024-04-17 06:56:43.485881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.486054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.486079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.972 qpair failed and we were unable to recover it. 00:30:38.972 [2024-04-17 06:56:43.486261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.486421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.486450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.972 qpair failed and we were unable to recover it. 00:30:38.972 [2024-04-17 06:56:43.486617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.486794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.486836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.972 qpair failed and we were unable to recover it. 00:30:38.972 [2024-04-17 06:56:43.486976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.487120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.487144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.972 qpair failed and we were unable to recover it. 
00:30:38.972 [2024-04-17 06:56:43.487296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.487486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.487528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.972 qpair failed and we were unable to recover it. 00:30:38.972 [2024-04-17 06:56:43.487684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.487873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.487898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.972 qpair failed and we were unable to recover it. 00:30:38.972 [2024-04-17 06:56:43.488048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.488229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.488258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.972 qpair failed and we were unable to recover it. 00:30:38.972 [2024-04-17 06:56:43.488452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.488653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.488693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.972 qpair failed and we were unable to recover it. 00:30:38.972 [2024-04-17 06:56:43.488853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.489015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.489039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.972 qpair failed and we were unable to recover it. 00:30:38.972 [2024-04-17 06:56:43.489169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.489332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.489375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.972 qpair failed and we were unable to recover it. 00:30:38.972 [2024-04-17 06:56:43.489526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.489720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.489763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.972 qpair failed and we were unable to recover it. 
00:30:38.972 [2024-04-17 06:56:43.489883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.490031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.490056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.972 qpair failed and we were unable to recover it. 00:30:38.972 [2024-04-17 06:56:43.490256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.490420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.490447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.972 qpair failed and we were unable to recover it. 00:30:38.972 [2024-04-17 06:56:43.490647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.490809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.490836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.972 qpair failed and we were unable to recover it. 00:30:38.972 [2024-04-17 06:56:43.490964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.491093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.491118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.972 qpair failed and we were unable to recover it. 00:30:38.972 [2024-04-17 06:56:43.491282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.491426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.491451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.972 qpair failed and we were unable to recover it. 00:30:38.972 [2024-04-17 06:56:43.491616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.491762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.491805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.972 qpair failed and we were unable to recover it. 00:30:38.972 [2024-04-17 06:56:43.491965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.972 [2024-04-17 06:56:43.492092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.492119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.973 qpair failed and we were unable to recover it. 
00:30:38.973 [2024-04-17 06:56:43.492278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.492415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.492439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.973 qpair failed and we were unable to recover it. 00:30:38.973 [2024-04-17 06:56:43.492631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.492787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.492812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.973 qpair failed and we were unable to recover it. 00:30:38.973 [2024-04-17 06:56:43.492967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.493103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.493128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.973 qpair failed and we were unable to recover it. 00:30:38.973 [2024-04-17 06:56:43.493285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.493441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.493485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.973 qpair failed and we were unable to recover it. 00:30:38.973 [2024-04-17 06:56:43.493670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.493838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.493881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.973 qpair failed and we were unable to recover it. 00:30:38.973 [2024-04-17 06:56:43.494036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.494189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.494214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.973 qpair failed and we were unable to recover it. 00:30:38.973 [2024-04-17 06:56:43.494371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.494534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.494575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.973 qpair failed and we were unable to recover it. 
00:30:38.973 [2024-04-17 06:56:43.494729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.494880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.494904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.973 qpair failed and we were unable to recover it. 00:30:38.973 [2024-04-17 06:56:43.495033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.495262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.495305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.973 qpair failed and we were unable to recover it. 00:30:38.973 [2024-04-17 06:56:43.495477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.495684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.495727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.973 qpair failed and we were unable to recover it. 00:30:38.973 [2024-04-17 06:56:43.495882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.496015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.496040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.973 qpair failed and we were unable to recover it. 00:30:38.973 [2024-04-17 06:56:43.496172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.496366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.496409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.973 qpair failed and we were unable to recover it. 00:30:38.973 [2024-04-17 06:56:43.496566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.496746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.496773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.973 qpair failed and we were unable to recover it. 00:30:38.973 [2024-04-17 06:56:43.496930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.497061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.497088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.973 qpair failed and we were unable to recover it. 
00:30:38.973 [2024-04-17 06:56:43.497276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.497456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.497513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.973 qpair failed and we were unable to recover it. 00:30:38.973 [2024-04-17 06:56:43.497668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.497813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.497839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.973 qpair failed and we were unable to recover it. 00:30:38.973 [2024-04-17 06:56:43.497962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.498100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.498124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.973 qpair failed and we were unable to recover it. 00:30:38.973 [2024-04-17 06:56:43.498283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.498457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.498502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.973 qpair failed and we were unable to recover it. 00:30:38.973 [2024-04-17 06:56:43.498640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.498782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.498809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.973 qpair failed and we were unable to recover it. 00:30:38.973 [2024-04-17 06:56:43.498964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.499118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.499142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.973 qpair failed and we were unable to recover it. 00:30:38.973 [2024-04-17 06:56:43.499335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.499536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.499580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.973 qpair failed and we were unable to recover it. 
00:30:38.973 [2024-04-17 06:56:43.499737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.499870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.499896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.973 qpair failed and we were unable to recover it. 00:30:38.973 [2024-04-17 06:56:43.500031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.500256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.500301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.973 qpair failed and we were unable to recover it. 00:30:38.973 [2024-04-17 06:56:43.500488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.500644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.500669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.973 qpair failed and we were unable to recover it. 00:30:38.973 [2024-04-17 06:56:43.500823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.500954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.500978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.973 qpair failed and we were unable to recover it. 00:30:38.973 [2024-04-17 06:56:43.501102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.501266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.501309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.973 qpair failed and we were unable to recover it. 00:30:38.973 [2024-04-17 06:56:43.501482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.501634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.501658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.973 qpair failed and we were unable to recover it. 00:30:38.973 [2024-04-17 06:56:43.501792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.501964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.973 [2024-04-17 06:56:43.501989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.973 qpair failed and we were unable to recover it. 
00:30:38.973 [2024-04-17 06:56:43.502125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.502307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.502352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.974 qpair failed and we were unable to recover it. 00:30:38.974 [2024-04-17 06:56:43.502503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.502736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.502787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.974 qpair failed and we were unable to recover it. 00:30:38.974 [2024-04-17 06:56:43.503577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.503780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.503815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.974 qpair failed and we were unable to recover it. 00:30:38.974 [2024-04-17 06:56:43.504027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.504157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.504193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.974 qpair failed and we were unable to recover it. 00:30:38.974 [2024-04-17 06:56:43.504350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.504556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.504599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.974 qpair failed and we were unable to recover it. 00:30:38.974 [2024-04-17 06:56:43.504767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.504951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.504976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.974 qpair failed and we were unable to recover it. 00:30:38.974 [2024-04-17 06:56:43.505108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.505302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.505346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.974 qpair failed and we were unable to recover it. 
00:30:38.974 [2024-04-17 06:56:43.505502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.505719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.505745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.974 qpair failed and we were unable to recover it. 00:30:38.974 [2024-04-17 06:56:43.505864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.506018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.506042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.974 qpair failed and we were unable to recover it. 00:30:38.974 [2024-04-17 06:56:43.506199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.506370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.506417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.974 qpair failed and we were unable to recover it. 00:30:38.974 [2024-04-17 06:56:43.506614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.506810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.506852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.974 qpair failed and we were unable to recover it. 00:30:38.974 [2024-04-17 06:56:43.506976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.507136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.507161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.974 qpair failed and we were unable to recover it. 00:30:38.974 [2024-04-17 06:56:43.507381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.507608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.507651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.974 qpair failed and we were unable to recover it. 00:30:38.974 [2024-04-17 06:56:43.507869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.508045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.508069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.974 qpair failed and we were unable to recover it. 
00:30:38.974 [2024-04-17 06:56:43.508265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.508464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.508504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.974 qpair failed and we were unable to recover it. 00:30:38.974 [2024-04-17 06:56:43.508688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.508869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.508910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.974 qpair failed and we were unable to recover it. 00:30:38.974 [2024-04-17 06:56:43.509085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.509229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.509256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.974 qpair failed and we were unable to recover it. 00:30:38.974 [2024-04-17 06:56:43.509405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.509546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.509580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.974 qpair failed and we were unable to recover it. 00:30:38.974 [2024-04-17 06:56:43.509756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.509904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.509928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.974 qpair failed and we were unable to recover it. 00:30:38.974 [2024-04-17 06:56:43.510079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.510204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.510234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.974 qpair failed and we were unable to recover it. 00:30:38.974 [2024-04-17 06:56:43.510404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.510614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.510640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.974 qpair failed and we were unable to recover it. 
00:30:38.974 [2024-04-17 06:56:43.510860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.511015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.511040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.974 qpair failed and we were unable to recover it. 00:30:38.974 [2024-04-17 06:56:43.511191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.511348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.511389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.974 qpair failed and we were unable to recover it. 00:30:38.974 [2024-04-17 06:56:43.511596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.511813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.511854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.974 qpair failed and we were unable to recover it. 00:30:38.974 [2024-04-17 06:56:43.512037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.512187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.974 [2024-04-17 06:56:43.512212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.975 qpair failed and we were unable to recover it. 00:30:38.975 [2024-04-17 06:56:43.512398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.512567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.512611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.975 qpair failed and we were unable to recover it. 00:30:38.975 [2024-04-17 06:56:43.512779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.512955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.512980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.975 qpair failed and we were unable to recover it. 00:30:38.975 [2024-04-17 06:56:43.513164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.513909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.513939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.975 qpair failed and we were unable to recover it. 
00:30:38.975 [2024-04-17 06:56:43.514139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.514314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.514340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.975 qpair failed and we were unable to recover it. 00:30:38.975 [2024-04-17 06:56:43.514505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.514731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.514776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.975 qpair failed and we were unable to recover it. 00:30:38.975 [2024-04-17 06:56:43.514989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.515129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.515153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.975 qpair failed and we were unable to recover it. 00:30:38.975 [2024-04-17 06:56:43.515299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.515484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.515525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.975 qpair failed and we were unable to recover it. 00:30:38.975 [2024-04-17 06:56:43.515675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.515867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.515909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.975 qpair failed and we were unable to recover it. 00:30:38.975 [2024-04-17 06:56:43.516092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.516297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.516340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.975 qpair failed and we were unable to recover it. 00:30:38.975 [2024-04-17 06:56:43.516496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.516688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.516729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.975 qpair failed and we were unable to recover it. 
00:30:38.975 [2024-04-17 06:56:43.516897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.517047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.517073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.975 qpair failed and we were unable to recover it. 00:30:38.975 [2024-04-17 06:56:43.517292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.517462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.517505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.975 qpair failed and we were unable to recover it. 00:30:38.975 [2024-04-17 06:56:43.518288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.518501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.518544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.975 qpair failed and we were unable to recover it. 00:30:38.975 [2024-04-17 06:56:43.518727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.518876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.518904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.975 qpair failed and we were unable to recover it. 00:30:38.975 [2024-04-17 06:56:43.519040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.519197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.519227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.975 qpair failed and we were unable to recover it. 00:30:38.975 [2024-04-17 06:56:43.519358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.519535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.519562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.975 qpair failed and we were unable to recover it. 00:30:38.975 [2024-04-17 06:56:43.519736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.519890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.519916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.975 qpair failed and we were unable to recover it. 
00:30:38.975 [2024-04-17 06:56:43.520096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.520261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.520303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.975 qpair failed and we were unable to recover it. 00:30:38.975 [2024-04-17 06:56:43.520496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.520656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.520696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.975 qpair failed and we were unable to recover it. 00:30:38.975 [2024-04-17 06:56:43.520824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.520957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.520984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.975 qpair failed and we were unable to recover it. 00:30:38.975 [2024-04-17 06:56:43.521116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.521304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.521331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.975 qpair failed and we were unable to recover it. 00:30:38.975 [2024-04-17 06:56:43.521465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.521596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.521620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.975 qpair failed and we were unable to recover it. 00:30:38.975 [2024-04-17 06:56:43.521743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.521899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.521925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.975 qpair failed and we were unable to recover it. 00:30:38.975 [2024-04-17 06:56:43.522081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.522267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.522308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.975 qpair failed and we were unable to recover it. 
00:30:38.975 [2024-04-17 06:56:43.522466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.522649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.522689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.975 qpair failed and we were unable to recover it. 00:30:38.975 [2024-04-17 06:56:43.522824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.522977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.523002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.975 qpair failed and we were unable to recover it. 00:30:38.975 [2024-04-17 06:56:43.523129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.523269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.523295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.975 qpair failed and we were unable to recover it. 00:30:38.975 [2024-04-17 06:56:43.523419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.523554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.523579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.975 qpair failed and we were unable to recover it. 00:30:38.975 [2024-04-17 06:56:43.523717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.975 [2024-04-17 06:56:43.523845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.523869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.976 qpair failed and we were unable to recover it. 00:30:38.976 [2024-04-17 06:56:43.524016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.524139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.524164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.976 qpair failed and we were unable to recover it. 00:30:38.976 [2024-04-17 06:56:43.524332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.524467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.524493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.976 qpair failed and we were unable to recover it. 
00:30:38.976 [2024-04-17 06:56:43.524649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.524819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.524846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.976 qpair failed and we were unable to recover it. 00:30:38.976 [2024-04-17 06:56:43.525001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.525187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.525213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.976 qpair failed and we were unable to recover it. 00:30:38.976 [2024-04-17 06:56:43.525370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.525566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.525591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.976 qpair failed and we were unable to recover it. 00:30:38.976 [2024-04-17 06:56:43.525758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.525892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.525919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.976 qpair failed and we were unable to recover it. 00:30:38.976 [2024-04-17 06:56:43.526054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.526185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.526211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.976 qpair failed and we were unable to recover it. 00:30:38.976 [2024-04-17 06:56:43.526341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.526464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.526489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.976 qpair failed and we were unable to recover it. 00:30:38.976 [2024-04-17 06:56:43.526686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.526856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.526881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.976 qpair failed and we were unable to recover it. 
00:30:38.976 [2024-04-17 06:56:43.527034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.527213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.527239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.976 qpair failed and we were unable to recover it. 00:30:38.976 [2024-04-17 06:56:43.527374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.527539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.527564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.976 qpair failed and we were unable to recover it. 00:30:38.976 [2024-04-17 06:56:43.527727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.527911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.527936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.976 qpair failed and we were unable to recover it. 00:30:38.976 [2024-04-17 06:56:43.528085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.528247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.528272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.976 qpair failed and we were unable to recover it. 00:30:38.976 [2024-04-17 06:56:43.528433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.528563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.528590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.976 qpair failed and we were unable to recover it. 00:30:38.976 [2024-04-17 06:56:43.528717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.528893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.528918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.976 qpair failed and we were unable to recover it. 00:30:38.976 [2024-04-17 06:56:43.529067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.529237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.529265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.976 qpair failed and we were unable to recover it. 
00:30:38.976 [2024-04-17 06:56:43.529403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.529594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.529625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.976 qpair failed and we were unable to recover it. 00:30:38.976 [2024-04-17 06:56:43.529784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.529917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.529941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.976 qpair failed and we were unable to recover it. 00:30:38.976 [2024-04-17 06:56:43.530120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.530283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.530313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.976 qpair failed and we were unable to recover it. 00:30:38.976 [2024-04-17 06:56:43.530437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.530628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.530654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.976 qpair failed and we were unable to recover it. 00:30:38.976 [2024-04-17 06:56:43.530803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.530972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.530996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.976 qpair failed and we were unable to recover it. 00:30:38.976 [2024-04-17 06:56:43.531191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.531326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.531351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.976 qpair failed and we were unable to recover it. 00:30:38.976 [2024-04-17 06:56:43.531506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.531628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.531655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.976 qpair failed and we were unable to recover it. 
00:30:38.976 [2024-04-17 06:56:43.531783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.531937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.531962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.976 qpair failed and we were unable to recover it. 00:30:38.976 [2024-04-17 06:56:43.532146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.532284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.532309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.976 qpair failed and we were unable to recover it. 00:30:38.976 [2024-04-17 06:56:43.532471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.532625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.976 [2024-04-17 06:56:43.532650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:38.976 qpair failed and we were unable to recover it. 00:30:38.976 [2024-04-17 06:56:43.532805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.259 [2024-04-17 06:56:43.532957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.259 [2024-04-17 06:56:43.532991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.259 qpair failed and we were unable to recover it. 00:30:39.259 [2024-04-17 06:56:43.533144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.259 [2024-04-17 06:56:43.533279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.259 [2024-04-17 06:56:43.533305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.259 qpair failed and we were unable to recover it. 00:30:39.259 [2024-04-17 06:56:43.533445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.259 [2024-04-17 06:56:43.533574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.259 [2024-04-17 06:56:43.533599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.259 qpair failed and we were unable to recover it. 00:30:39.259 [2024-04-17 06:56:43.533728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.259 [2024-04-17 06:56:43.533858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.259 [2024-04-17 06:56:43.533883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.259 qpair failed and we were unable to recover it. 
00:30:39.259 [2024-04-17 06:56:43.534009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.259 [2024-04-17 06:56:43.534135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.259 [2024-04-17 06:56:43.534160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.259 qpair failed and we were unable to recover it. 00:30:39.259 [2024-04-17 06:56:43.534296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.259 [2024-04-17 06:56:43.534455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.259 [2024-04-17 06:56:43.534488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.259 qpair failed and we were unable to recover it. 00:30:39.259 [2024-04-17 06:56:43.534649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.259 [2024-04-17 06:56:43.534802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.259 [2024-04-17 06:56:43.534827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.259 qpair failed and we were unable to recover it. 00:30:39.259 [2024-04-17 06:56:43.534953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.259 [2024-04-17 06:56:43.535114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.259 [2024-04-17 06:56:43.535141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.259 qpair failed and we were unable to recover it. 00:30:39.259 [2024-04-17 06:56:43.535307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.259 [2024-04-17 06:56:43.535465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.259 [2024-04-17 06:56:43.535490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.259 qpair failed and we were unable to recover it. 00:30:39.259 [2024-04-17 06:56:43.535620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.259 [2024-04-17 06:56:43.535775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.259 [2024-04-17 06:56:43.535800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.259 qpair failed and we were unable to recover it. 00:30:39.260 [2024-04-17 06:56:43.535982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.536169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.536202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.260 qpair failed and we were unable to recover it. 
00:30:39.260 [2024-04-17 06:56:43.536371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.536531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.536558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.260 qpair failed and we were unable to recover it. 00:30:39.260 [2024-04-17 06:56:43.536717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.536870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.536895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.260 qpair failed and we were unable to recover it. 00:30:39.260 [2024-04-17 06:56:43.537055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.537207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.537234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.260 qpair failed and we were unable to recover it. 00:30:39.260 [2024-04-17 06:56:43.537387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.537546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.537571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.260 qpair failed and we were unable to recover it. 00:30:39.260 [2024-04-17 06:56:43.537727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.537870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.537896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.260 qpair failed and we were unable to recover it. 00:30:39.260 [2024-04-17 06:56:43.538024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.538152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.538184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.260 qpair failed and we were unable to recover it. 00:30:39.260 [2024-04-17 06:56:43.538321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.538475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.538505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.260 qpair failed and we were unable to recover it. 
00:30:39.260 [2024-04-17 06:56:43.538669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.538797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.538822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.260 qpair failed and we were unable to recover it. 00:30:39.260 [2024-04-17 06:56:43.538976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.539110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.539136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.260 qpair failed and we were unable to recover it. 00:30:39.260 [2024-04-17 06:56:43.539362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.539493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.539518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.260 qpair failed and we were unable to recover it. 00:30:39.260 [2024-04-17 06:56:43.539647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.539800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.539824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.260 qpair failed and we were unable to recover it. 00:30:39.260 [2024-04-17 06:56:43.539977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.540126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.540151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.260 qpair failed and we were unable to recover it. 00:30:39.260 [2024-04-17 06:56:43.540302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.540428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.540454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.260 qpair failed and we were unable to recover it. 00:30:39.260 [2024-04-17 06:56:43.540635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.540786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.540812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.260 qpair failed and we were unable to recover it. 
00:30:39.260 [2024-04-17 06:56:43.540966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.541120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.541145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.260 qpair failed and we were unable to recover it. 00:30:39.260 [2024-04-17 06:56:43.541293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.541449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.541473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.260 qpair failed and we were unable to recover it. 00:30:39.260 [2024-04-17 06:56:43.541646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.541805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.541832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.260 qpair failed and we were unable to recover it. 00:30:39.260 [2024-04-17 06:56:43.541986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.542151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.542184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.260 qpair failed and we were unable to recover it. 00:30:39.260 [2024-04-17 06:56:43.542348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.542474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.542500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.260 qpair failed and we were unable to recover it. 00:30:39.260 [2024-04-17 06:56:43.542683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.542822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.542849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.260 qpair failed and we were unable to recover it. 00:30:39.260 [2024-04-17 06:56:43.542984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.543114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.543140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.260 qpair failed and we were unable to recover it. 
00:30:39.260 [2024-04-17 06:56:43.543308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.543468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.543499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.260 qpair failed and we were unable to recover it. 00:30:39.260 [2024-04-17 06:56:43.543674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.543806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.543830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.260 qpair failed and we were unable to recover it. 00:30:39.260 [2024-04-17 06:56:43.543958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.544094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.544120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.260 qpair failed and we were unable to recover it. 00:30:39.260 [2024-04-17 06:56:43.544288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.544423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.544449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.260 qpair failed and we were unable to recover it. 00:30:39.260 [2024-04-17 06:56:43.544606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.544767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.544792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.260 qpair failed and we were unable to recover it. 00:30:39.260 [2024-04-17 06:56:43.544949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.545084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.545110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.260 qpair failed and we were unable to recover it. 00:30:39.260 [2024-04-17 06:56:43.545281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.260 [2024-04-17 06:56:43.545407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.545432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.261 qpair failed and we were unable to recover it. 
00:30:39.261 [2024-04-17 06:56:43.545620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.545766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.545791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.261 qpair failed and we were unable to recover it. 00:30:39.261 [2024-04-17 06:56:43.545960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.546084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.546110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.261 qpair failed and we were unable to recover it. 00:30:39.261 [2024-04-17 06:56:43.546272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.546401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.546426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.261 qpair failed and we were unable to recover it. 00:30:39.261 [2024-04-17 06:56:43.546583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.546745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.546770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.261 qpair failed and we were unable to recover it. 00:30:39.261 [2024-04-17 06:56:43.546895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.547030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.547054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.261 qpair failed and we were unable to recover it. 00:30:39.261 [2024-04-17 06:56:43.547182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.547311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.547336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.261 qpair failed and we were unable to recover it. 00:30:39.261 [2024-04-17 06:56:43.547461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.547622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.547646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.261 qpair failed and we were unable to recover it. 
00:30:39.261 [2024-04-17 06:56:43.547803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.547944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.547968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.261 qpair failed and we were unable to recover it. 00:30:39.261 [2024-04-17 06:56:43.548135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.548284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.548308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.261 qpair failed and we were unable to recover it. 00:30:39.261 [2024-04-17 06:56:43.548433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.548586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.548609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.261 qpair failed and we were unable to recover it. 00:30:39.261 [2024-04-17 06:56:43.548799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.548946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.548971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.261 qpair failed and we were unable to recover it. 00:30:39.261 [2024-04-17 06:56:43.549127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.549270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.549294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.261 qpair failed and we were unable to recover it. 00:30:39.261 [2024-04-17 06:56:43.549480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.549605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.549631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.261 qpair failed and we were unable to recover it. 00:30:39.261 [2024-04-17 06:56:43.549753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.549896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.549921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.261 qpair failed and we were unable to recover it. 
00:30:39.261 [2024-04-17 06:56:43.550041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.550213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.550239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.261 qpair failed and we were unable to recover it. 00:30:39.261 [2024-04-17 06:56:43.550369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.550502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.550528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.261 qpair failed and we were unable to recover it. 00:30:39.261 [2024-04-17 06:56:43.550695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.550875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.550900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.261 qpair failed and we were unable to recover it. 00:30:39.261 [2024-04-17 06:56:43.551076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.551246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.551272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.261 qpair failed and we were unable to recover it. 00:30:39.261 [2024-04-17 06:56:43.551424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.551569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.551593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.261 qpair failed and we were unable to recover it. 00:30:39.261 [2024-04-17 06:56:43.551719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.551864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.551889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.261 qpair failed and we were unable to recover it. 00:30:39.261 [2024-04-17 06:56:43.552015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.552186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.552211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.261 qpair failed and we were unable to recover it. 
00:30:39.261 [2024-04-17 06:56:43.552364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.552493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.552518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.261 qpair failed and we were unable to recover it. 00:30:39.261 [2024-04-17 06:56:43.552702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.552830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.552854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.261 qpair failed and we were unable to recover it. 00:30:39.261 [2024-04-17 06:56:43.553006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.553162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.553194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.261 qpair failed and we were unable to recover it. 00:30:39.261 [2024-04-17 06:56:43.553347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.553495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.553520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.261 qpair failed and we were unable to recover it. 00:30:39.261 [2024-04-17 06:56:43.553679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.553812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.553838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.261 qpair failed and we were unable to recover it. 00:30:39.261 [2024-04-17 06:56:43.553988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.554142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.554167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.261 qpair failed and we were unable to recover it. 00:30:39.261 [2024-04-17 06:56:43.554315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.554466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.261 [2024-04-17 06:56:43.554490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.261 qpair failed and we were unable to recover it. 
00:30:39.261 [2024-04-17 06:56:43.554620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.554778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.554802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.262 qpair failed and we were unable to recover it. 00:30:39.262 [2024-04-17 06:56:43.554925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.555076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.555100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.262 qpair failed and we were unable to recover it. 00:30:39.262 [2024-04-17 06:56:43.555261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.555419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.555444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.262 qpair failed and we were unable to recover it. 00:30:39.262 [2024-04-17 06:56:43.555579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.555743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.555767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.262 qpair failed and we were unable to recover it. 00:30:39.262 [2024-04-17 06:56:43.555921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.556076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.556101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.262 qpair failed and we were unable to recover it. 00:30:39.262 [2024-04-17 06:56:43.556243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.556374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.556398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.262 qpair failed and we were unable to recover it. 00:30:39.262 [2024-04-17 06:56:43.556589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.556746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.556771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.262 qpair failed and we were unable to recover it. 
00:30:39.262 [2024-04-17 06:56:43.556900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.557037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.557062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.262 qpair failed and we were unable to recover it. 00:30:39.262 [2024-04-17 06:56:43.557211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.557353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.557378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.262 qpair failed and we were unable to recover it. 00:30:39.262 [2024-04-17 06:56:43.557503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.557664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.557693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.262 qpair failed and we were unable to recover it. 00:30:39.262 [2024-04-17 06:56:43.557844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.557991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.558015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.262 qpair failed and we were unable to recover it. 00:30:39.262 [2024-04-17 06:56:43.558138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.558295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.558322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.262 qpair failed and we were unable to recover it. 00:30:39.262 [2024-04-17 06:56:43.558452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.559233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.559263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.262 qpair failed and we were unable to recover it. 00:30:39.262 [2024-04-17 06:56:43.559407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.559571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.559597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.262 qpair failed and we were unable to recover it. 
00:30:39.262 [2024-04-17 06:56:43.559746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.559901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.559933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.262 qpair failed and we were unable to recover it. 00:30:39.262 [2024-04-17 06:56:43.560125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.560261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.560287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.262 qpair failed and we were unable to recover it. 00:30:39.262 [2024-04-17 06:56:43.560423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.560593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.560618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.262 qpair failed and we were unable to recover it. 00:30:39.262 [2024-04-17 06:56:43.560777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.560921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.560946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.262 qpair failed and we were unable to recover it. 00:30:39.262 [2024-04-17 06:56:43.561084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.561253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.561279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.262 qpair failed and we were unable to recover it. 00:30:39.262 [2024-04-17 06:56:43.561404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.561532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.561556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.262 qpair failed and we were unable to recover it. 00:30:39.262 [2024-04-17 06:56:43.561711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.561870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.561894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.262 qpair failed and we were unable to recover it. 
00:30:39.262 [2024-04-17 06:56:43.562044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.562171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.562205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.262 qpair failed and we were unable to recover it. 00:30:39.262 [2024-04-17 06:56:43.562355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.562479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.562513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.262 qpair failed and we were unable to recover it. 00:30:39.262 [2024-04-17 06:56:43.562664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.562798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.562827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.262 qpair failed and we were unable to recover it. 00:30:39.262 [2024-04-17 06:56:43.562962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.563111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.563136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.262 qpair failed and we were unable to recover it. 00:30:39.262 [2024-04-17 06:56:43.563269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.563425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.563450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.262 qpair failed and we were unable to recover it. 00:30:39.262 [2024-04-17 06:56:43.563603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.563758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.563783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.262 qpair failed and we were unable to recover it. 00:30:39.262 [2024-04-17 06:56:43.563936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.564071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.564095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.262 qpair failed and we were unable to recover it. 
00:30:39.262 [2024-04-17 06:56:43.564229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.564371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.262 [2024-04-17 06:56:43.564396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.263 [2024-04-17 06:56:43.564577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.564734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.564774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.263 [2024-04-17 06:56:43.565052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.565206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.565244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.263 [2024-04-17 06:56:43.565381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.565539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.565563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.263 [2024-04-17 06:56:43.565690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.565817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.565841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.263 [2024-04-17 06:56:43.566024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.566149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.566190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.263 [2024-04-17 06:56:43.566331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.566485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.566509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.263 qpair failed and we were unable to recover it. 
00:30:39.263 [2024-04-17 06:56:43.566680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.566838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.566862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.263 [2024-04-17 06:56:43.567019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.567218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.567244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.263 [2024-04-17 06:56:43.567372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.567495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.567520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.263 [2024-04-17 06:56:43.567681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.567840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.567865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.263 [2024-04-17 06:56:43.568021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.568186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.568212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.263 [2024-04-17 06:56:43.568364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.568493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.568520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.263 [2024-04-17 06:56:43.568703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.568876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.568900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.263 qpair failed and we were unable to recover it. 
00:30:39.263 [2024-04-17 06:56:43.569083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.569251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.569275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.263 [2024-04-17 06:56:43.569430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.569556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.569585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.263 [2024-04-17 06:56:43.569751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.569883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.569908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.263 [2024-04-17 06:56:43.570057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.570212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.570238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.263 [2024-04-17 06:56:43.570380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.570524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.570548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.263 [2024-04-17 06:56:43.570677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.570797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.570821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.263 [2024-04-17 06:56:43.570982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.571109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.571135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.263 qpair failed and we were unable to recover it. 
00:30:39.263 [2024-04-17 06:56:43.571300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.571460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.571495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.263 [2024-04-17 06:56:43.571653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.571796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.571819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.263 [2024-04-17 06:56:43.571961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.572085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.572112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.263 [2024-04-17 06:56:43.572272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.572412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.572436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.263 [2024-04-17 06:56:43.572571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.572721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.572749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.263 [2024-04-17 06:56:43.572903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.573053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.573078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.263 [2024-04-17 06:56:43.573237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.573367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.573391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.263 qpair failed and we were unable to recover it. 
00:30:39.263 [2024-04-17 06:56:43.573545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.573670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.263 [2024-04-17 06:56:43.573694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.263 [2024-04-17 06:56:43.573847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.573989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.574014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.264 qpair failed and we were unable to recover it. 00:30:39.264 [2024-04-17 06:56:43.574146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.574296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.574324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.264 qpair failed and we were unable to recover it. 00:30:39.264 [2024-04-17 06:56:43.574455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.574620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.574645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.264 qpair failed and we were unable to recover it. 00:30:39.264 [2024-04-17 06:56:43.574805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.574929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.574955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.264 qpair failed and we were unable to recover it. 00:30:39.264 [2024-04-17 06:56:43.575082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.575235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.575261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.264 qpair failed and we were unable to recover it. 00:30:39.264 [2024-04-17 06:56:43.575419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.575564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.575589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.264 qpair failed and we were unable to recover it. 
00:30:39.264 [2024-04-17 06:56:43.575746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.575907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.575932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.264 qpair failed and we were unable to recover it. 00:30:39.264 [2024-04-17 06:56:43.576090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.576224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.576250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.264 qpair failed and we were unable to recover it. 00:30:39.264 [2024-04-17 06:56:43.576383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.576512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.576537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.264 qpair failed and we were unable to recover it. 00:30:39.264 [2024-04-17 06:56:43.576669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.576820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.576844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.264 qpair failed and we were unable to recover it. 00:30:39.264 [2024-04-17 06:56:43.576995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.577124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.577150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.264 qpair failed and we were unable to recover it. 00:30:39.264 [2024-04-17 06:56:43.577287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.577409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.577434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.264 qpair failed and we were unable to recover it. 00:30:39.264 [2024-04-17 06:56:43.577590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.577744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.577770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.264 qpair failed and we were unable to recover it. 
00:30:39.264 [2024-04-17 06:56:43.577952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.578123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.578148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.264 qpair failed and we were unable to recover it. 00:30:39.264 [2024-04-17 06:56:43.578303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.578431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.578455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.264 qpair failed and we were unable to recover it. 00:30:39.264 [2024-04-17 06:56:43.578589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.578775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.578800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.264 qpair failed and we were unable to recover it. 00:30:39.264 [2024-04-17 06:56:43.578946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.579082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.579109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.264 qpair failed and we were unable to recover it. 00:30:39.264 [2024-04-17 06:56:43.579285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.579422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.579448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.264 qpair failed and we were unable to recover it. 00:30:39.264 [2024-04-17 06:56:43.579585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.579746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.579771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.264 qpair failed and we were unable to recover it. 00:30:39.264 [2024-04-17 06:56:43.579895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.580025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.580051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.264 qpair failed and we were unable to recover it. 
00:30:39.264 [2024-04-17 06:56:43.580209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.580340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.580365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.264 qpair failed and we were unable to recover it. 00:30:39.264 [2024-04-17 06:56:43.580496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.580651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.580676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.264 qpair failed and we were unable to recover it. 00:30:39.264 [2024-04-17 06:56:43.580827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.580977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.264 [2024-04-17 06:56:43.581001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.264 qpair failed and we were unable to recover it. 00:30:39.264 [2024-04-17 06:56:43.581154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.581336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.581361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.265 qpair failed and we were unable to recover it. 00:30:39.265 [2024-04-17 06:56:43.581495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.581656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.581682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.265 qpair failed and we were unable to recover it. 00:30:39.265 [2024-04-17 06:56:43.581854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.582006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.582031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.265 qpair failed and we were unable to recover it. 00:30:39.265 [2024-04-17 06:56:43.582157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.582313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.582340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.265 qpair failed and we were unable to recover it. 
00:30:39.265 [2024-04-17 06:56:43.582477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.582645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.582670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.265 qpair failed and we were unable to recover it. 00:30:39.265 [2024-04-17 06:56:43.582828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.582960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.582987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.265 qpair failed and we were unable to recover it. 00:30:39.265 [2024-04-17 06:56:43.583148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.583289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.583315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.265 qpair failed and we were unable to recover it. 00:30:39.265 [2024-04-17 06:56:43.583478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.583638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.583662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.265 qpair failed and we were unable to recover it. 00:30:39.265 [2024-04-17 06:56:43.583901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.584044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.584069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.265 qpair failed and we were unable to recover it. 00:30:39.265 [2024-04-17 06:56:43.584202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.584325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.584350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.265 qpair failed and we were unable to recover it. 00:30:39.265 [2024-04-17 06:56:43.584474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.584593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.584617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.265 qpair failed and we were unable to recover it. 
00:30:39.265 [2024-04-17 06:56:43.584745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.584882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.584907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.265 qpair failed and we were unable to recover it. 00:30:39.265 [2024-04-17 06:56:43.585059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.585203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.585229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.265 qpair failed and we were unable to recover it. 00:30:39.265 [2024-04-17 06:56:43.585363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.585492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.585517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.265 qpair failed and we were unable to recover it. 00:30:39.265 [2024-04-17 06:56:43.585647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.585800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.585825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.265 qpair failed and we were unable to recover it. 00:30:39.265 [2024-04-17 06:56:43.586060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.586199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.586229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.265 qpair failed and we were unable to recover it. 00:30:39.265 [2024-04-17 06:56:43.586377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.586501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.586528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.265 qpair failed and we were unable to recover it. 00:30:39.265 [2024-04-17 06:56:43.586661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.586795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.586819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.265 qpair failed and we were unable to recover it. 
00:30:39.265 [2024-04-17 06:56:43.586939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.587065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.587089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.265 qpair failed and we were unable to recover it. 00:30:39.265 [2024-04-17 06:56:43.587264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.587400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.587425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.265 qpair failed and we were unable to recover it. 00:30:39.265 [2024-04-17 06:56:43.587604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.587726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.587751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.265 qpair failed and we were unable to recover it. 00:30:39.265 [2024-04-17 06:56:43.587886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.588039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.588064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.265 qpair failed and we were unable to recover it. 00:30:39.265 [2024-04-17 06:56:43.588194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.588322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.588346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.265 qpair failed and we were unable to recover it. 00:30:39.265 [2024-04-17 06:56:43.588499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.588644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.588669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.265 qpair failed and we were unable to recover it. 00:30:39.265 [2024-04-17 06:56:43.588814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.588967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.588992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.265 qpair failed and we were unable to recover it. 
00:30:39.265 [2024-04-17 06:56:43.589151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.589286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.589311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.265 qpair failed and we were unable to recover it. 00:30:39.265 [2024-04-17 06:56:43.589436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.589560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.589585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.265 qpair failed and we were unable to recover it. 00:30:39.265 [2024-04-17 06:56:43.589744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.589981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.590006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.265 qpair failed and we were unable to recover it. 00:30:39.265 [2024-04-17 06:56:43.590129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.590270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.265 [2024-04-17 06:56:43.590296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.265 qpair failed and we were unable to recover it. 00:30:39.266 [2024-04-17 06:56:43.590532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.590656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.590681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.266 qpair failed and we were unable to recover it. 00:30:39.266 [2024-04-17 06:56:43.590832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.590989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.591013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.266 qpair failed and we were unable to recover it. 00:30:39.266 [2024-04-17 06:56:43.591142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.591305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.591331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.266 qpair failed and we were unable to recover it. 
00:30:39.266 [2024-04-17 06:56:43.591458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.591619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.591644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.266 qpair failed and we were unable to recover it. 00:30:39.266 [2024-04-17 06:56:43.591793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.591916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.591941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.266 qpair failed and we were unable to recover it. 00:30:39.266 [2024-04-17 06:56:43.592127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.592298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.592324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.266 qpair failed and we were unable to recover it. 00:30:39.266 [2024-04-17 06:56:43.592480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.592637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.592662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.266 qpair failed and we were unable to recover it. 00:30:39.266 [2024-04-17 06:56:43.592814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.592976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.593001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.266 qpair failed and we were unable to recover it. 00:30:39.266 [2024-04-17 06:56:43.593148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.593290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.593316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.266 qpair failed and we were unable to recover it. 00:30:39.266 [2024-04-17 06:56:43.593464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.593621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.593646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.266 qpair failed and we were unable to recover it. 
00:30:39.266 [2024-04-17 06:56:43.593776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.593926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.593951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.266 qpair failed and we were unable to recover it. 00:30:39.266 [2024-04-17 06:56:43.594127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.594294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.594319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.266 qpair failed and we were unable to recover it. 00:30:39.266 [2024-04-17 06:56:43.594478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.594601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.594627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.266 qpair failed and we were unable to recover it. 00:30:39.266 [2024-04-17 06:56:43.594809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.594967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.594992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.266 qpair failed and we were unable to recover it. 00:30:39.266 [2024-04-17 06:56:43.595142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.595286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.595311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.266 qpair failed and we were unable to recover it. 00:30:39.266 [2024-04-17 06:56:43.595477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.595652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.595677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.266 qpair failed and we were unable to recover it. 00:30:39.266 [2024-04-17 06:56:43.595832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.595989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.596014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.266 qpair failed and we were unable to recover it. 
00:30:39.266 [2024-04-17 06:56:43.596164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.596304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.596330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.266 qpair failed and we were unable to recover it. 00:30:39.266 [2024-04-17 06:56:43.596492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.596655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.596680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.266 qpair failed and we were unable to recover it. 00:30:39.266 [2024-04-17 06:56:43.596837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.596963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.596988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.266 qpair failed and we were unable to recover it. 00:30:39.266 [2024-04-17 06:56:43.597149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.597314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.597341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.266 qpair failed and we were unable to recover it. 00:30:39.266 [2024-04-17 06:56:43.597504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.597645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.597670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.266 qpair failed and we were unable to recover it. 00:30:39.266 [2024-04-17 06:56:43.597828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.597985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.598011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.266 qpair failed and we were unable to recover it. 00:30:39.266 [2024-04-17 06:56:43.598170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.598306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.598331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.266 qpair failed and we were unable to recover it. 
00:30:39.266 [2024-04-17 06:56:43.598487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.598654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.598679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.266 qpair failed and we were unable to recover it. 00:30:39.266 [2024-04-17 06:56:43.598816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.598944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.598969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.266 qpair failed and we were unable to recover it. 00:30:39.266 [2024-04-17 06:56:43.599208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.599336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.599360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.266 qpair failed and we were unable to recover it. 00:30:39.266 [2024-04-17 06:56:43.599497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.599673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.599698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.266 qpair failed and we were unable to recover it. 00:30:39.266 [2024-04-17 06:56:43.599837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.266 [2024-04-17 06:56:43.599995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.600021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.267 qpair failed and we were unable to recover it. 00:30:39.267 [2024-04-17 06:56:43.600186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.600321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.600347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.267 qpair failed and we were unable to recover it. 00:30:39.267 [2024-04-17 06:56:43.600469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.600609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.600633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.267 qpair failed and we were unable to recover it. 
00:30:39.267 [2024-04-17 06:56:43.600798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.600974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.600999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.267 qpair failed and we were unable to recover it. 00:30:39.267 [2024-04-17 06:56:43.601253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.601407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.601432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.267 qpair failed and we were unable to recover it. 00:30:39.267 [2024-04-17 06:56:43.601578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.601713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.601740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.267 qpair failed and we were unable to recover it. 00:30:39.267 [2024-04-17 06:56:43.601900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.602136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.602161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.267 qpair failed and we were unable to recover it. 00:30:39.267 [2024-04-17 06:56:43.602321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.602487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.602512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.267 qpair failed and we were unable to recover it. 00:30:39.267 [2024-04-17 06:56:43.602639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.602792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.602816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.267 qpair failed and we were unable to recover it. 00:30:39.267 [2024-04-17 06:56:43.602943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.603095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.603119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.267 qpair failed and we were unable to recover it. 
00:30:39.267 [2024-04-17 06:56:43.603247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.603373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.603399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.267 qpair failed and we were unable to recover it. 00:30:39.267 [2024-04-17 06:56:43.603530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.603690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.603715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.267 qpair failed and we were unable to recover it. 00:30:39.267 [2024-04-17 06:56:43.603893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.604045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.604069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.267 qpair failed and we were unable to recover it. 00:30:39.267 [2024-04-17 06:56:43.604198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.604368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.604393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.267 qpair failed and we were unable to recover it. 00:30:39.267 [2024-04-17 06:56:43.604578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.604701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.604728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.267 qpair failed and we were unable to recover it. 00:30:39.267 [2024-04-17 06:56:43.604852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.604978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.605003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.267 qpair failed and we were unable to recover it. 00:30:39.267 [2024-04-17 06:56:43.605164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.605319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.605344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.267 qpair failed and we were unable to recover it. 
00:30:39.267 [2024-04-17 06:56:43.605468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.605637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.605662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.267 qpair failed and we were unable to recover it. 00:30:39.267 [2024-04-17 06:56:43.605812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.605999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.606024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.267 qpair failed and we were unable to recover it. 00:30:39.267 [2024-04-17 06:56:43.606144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.606321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.606346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.267 qpair failed and we were unable to recover it. 00:30:39.267 [2024-04-17 06:56:43.606493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.606647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.606671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.267 qpair failed and we were unable to recover it. 00:30:39.267 [2024-04-17 06:56:43.606795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.606949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.606974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.267 qpair failed and we were unable to recover it. 00:30:39.267 [2024-04-17 06:56:43.607129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.607282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.607306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.267 qpair failed and we were unable to recover it. 00:30:39.267 [2024-04-17 06:56:43.607494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.607649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.607673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.267 qpair failed and we were unable to recover it. 
00:30:39.267 [2024-04-17 06:56:43.607829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.607963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.607989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.267 qpair failed and we were unable to recover it. 00:30:39.267 [2024-04-17 06:56:43.608118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.608282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.608308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.267 qpair failed and we were unable to recover it. 00:30:39.267 [2024-04-17 06:56:43.608434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.608592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.608617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.267 qpair failed and we were unable to recover it. 00:30:39.267 [2024-04-17 06:56:43.608739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.608875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.608900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.267 qpair failed and we were unable to recover it. 00:30:39.267 [2024-04-17 06:56:43.609026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.609147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.267 [2024-04-17 06:56:43.609171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.267 qpair failed and we were unable to recover it. 00:30:39.267 [2024-04-17 06:56:43.609307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.609454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.609478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.268 qpair failed and we were unable to recover it. 00:30:39.268 [2024-04-17 06:56:43.609604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.609728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.609752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.268 qpair failed and we were unable to recover it. 
00:30:39.268 [2024-04-17 06:56:43.609889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.610014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.610038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.268 qpair failed and we were unable to recover it. 00:30:39.268 [2024-04-17 06:56:43.610170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.610326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.610350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.268 qpair failed and we were unable to recover it. 00:30:39.268 [2024-04-17 06:56:43.610469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.610605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.610630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.268 qpair failed and we were unable to recover it. 00:30:39.268 [2024-04-17 06:56:43.610769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.610911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.610934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.268 qpair failed and we were unable to recover it. 00:30:39.268 [2024-04-17 06:56:43.611104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.611239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.611265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.268 qpair failed and we were unable to recover it. 00:30:39.268 [2024-04-17 06:56:43.611414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.611542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.611567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.268 qpair failed and we were unable to recover it. 00:30:39.268 [2024-04-17 06:56:43.611717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.611884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.611908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.268 qpair failed and we were unable to recover it. 
00:30:39.268 [2024-04-17 06:56:43.612043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.612181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.612207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.268 qpair failed and we were unable to recover it. 00:30:39.268 [2024-04-17 06:56:43.612340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.612467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.612492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.268 qpair failed and we were unable to recover it. 00:30:39.268 [2024-04-17 06:56:43.612622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.612751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.612776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.268 qpair failed and we were unable to recover it. 00:30:39.268 [2024-04-17 06:56:43.612909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.613061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.613087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.268 qpair failed and we were unable to recover it. 00:30:39.268 [2024-04-17 06:56:43.613252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.613406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.613430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.268 qpair failed and we were unable to recover it. 00:30:39.268 [2024-04-17 06:56:43.613558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.613683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.613708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.268 qpair failed and we were unable to recover it. 00:30:39.268 [2024-04-17 06:56:43.613837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.613994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.614018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.268 qpair failed and we were unable to recover it. 
00:30:39.268 [2024-04-17 06:56:43.614146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.614280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.614306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.268 qpair failed and we were unable to recover it. 00:30:39.268 [2024-04-17 06:56:43.614460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.614604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.614629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.268 qpair failed and we were unable to recover it. 00:30:39.268 [2024-04-17 06:56:43.614755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.614912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.614940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.268 qpair failed and we were unable to recover it. 00:30:39.268 [2024-04-17 06:56:43.615101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.615244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.615270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.268 qpair failed and we were unable to recover it. 00:30:39.268 [2024-04-17 06:56:43.615409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.615571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.615597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.268 qpair failed and we were unable to recover it. 00:30:39.268 [2024-04-17 06:56:43.615726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.615878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.615903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.268 qpair failed and we were unable to recover it. 00:30:39.268 [2024-04-17 06:56:43.616037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.616161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.616193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.268 qpair failed and we were unable to recover it. 
00:30:39.268 [2024-04-17 06:56:43.616324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.616460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.616485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.268 qpair failed and we were unable to recover it. 00:30:39.268 [2024-04-17 06:56:43.616616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.268 [2024-04-17 06:56:43.616742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.616767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.269 qpair failed and we were unable to recover it. 00:30:39.269 [2024-04-17 06:56:43.616960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.617078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.617102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.269 qpair failed and we were unable to recover it. 00:30:39.269 [2024-04-17 06:56:43.617248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.617377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.617402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.269 qpair failed and we were unable to recover it. 00:30:39.269 [2024-04-17 06:56:43.617546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.617692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.617717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.269 qpair failed and we were unable to recover it. 00:30:39.269 [2024-04-17 06:56:43.617872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.618033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.618062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.269 qpair failed and we were unable to recover it. 00:30:39.269 [2024-04-17 06:56:43.618196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.618357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.618382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.269 qpair failed and we were unable to recover it. 
00:30:39.269 [2024-04-17 06:56:43.618534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.618663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.618688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.269 qpair failed and we were unable to recover it. 00:30:39.269 [2024-04-17 06:56:43.618819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.618951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.618976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.269 qpair failed and we were unable to recover it. 00:30:39.269 [2024-04-17 06:56:43.619102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.619241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.619268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.269 qpair failed and we were unable to recover it. 00:30:39.269 [2024-04-17 06:56:43.619404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.619568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.619593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.269 qpair failed and we were unable to recover it. 00:30:39.269 [2024-04-17 06:56:43.619739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.619890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.619915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.269 qpair failed and we were unable to recover it. 00:30:39.269 [2024-04-17 06:56:43.620048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.620190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.620216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.269 qpair failed and we were unable to recover it. 00:30:39.269 [2024-04-17 06:56:43.620350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.620485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.620511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.269 qpair failed and we were unable to recover it. 
00:30:39.269 [2024-04-17 06:56:43.620642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.620780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.620805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.269 qpair failed and we were unable to recover it. 00:30:39.269 [2024-04-17 06:56:43.620962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.621094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.621123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.269 qpair failed and we were unable to recover it. 00:30:39.269 [2024-04-17 06:56:43.621271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.621403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.621427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.269 qpair failed and we were unable to recover it. 00:30:39.269 [2024-04-17 06:56:43.621554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.621737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.621762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.269 qpair failed and we were unable to recover it. 00:30:39.269 [2024-04-17 06:56:43.621884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.622012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.622038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.269 qpair failed and we were unable to recover it. 00:30:39.269 [2024-04-17 06:56:43.622173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.622312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.622338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.269 qpair failed and we were unable to recover it. 00:30:39.269 [2024-04-17 06:56:43.622493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.622646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.622672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.269 qpair failed and we were unable to recover it. 
00:30:39.269 [2024-04-17 06:56:43.622905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.623062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.623086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.269 qpair failed and we were unable to recover it. 00:30:39.269 [2024-04-17 06:56:43.623254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.623415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.623439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.269 qpair failed and we were unable to recover it. 00:30:39.269 [2024-04-17 06:56:43.623655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.623784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.623809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.269 qpair failed and we were unable to recover it. 00:30:39.269 [2024-04-17 06:56:43.624054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.624185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.624212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.269 qpair failed and we were unable to recover it. 00:30:39.269 [2024-04-17 06:56:43.624368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.624506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.624535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.269 qpair failed and we were unable to recover it. 00:30:39.269 [2024-04-17 06:56:43.624664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.624800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.624824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.269 qpair failed and we were unable to recover it. 00:30:39.269 [2024-04-17 06:56:43.624952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.625084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.625109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.269 qpair failed and we were unable to recover it. 
00:30:39.269 [2024-04-17 06:56:43.625253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.625385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.625410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.269 qpair failed and we were unable to recover it. 00:30:39.269 [2024-04-17 06:56:43.625574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.625703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.269 [2024-04-17 06:56:43.625730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.269 qpair failed and we were unable to recover it. 00:30:39.270 [2024-04-17 06:56:43.625885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.626069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.626093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.270 qpair failed and we were unable to recover it. 00:30:39.270 [2024-04-17 06:56:43.626269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.626423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.626448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.270 qpair failed and we were unable to recover it. 00:30:39.270 [2024-04-17 06:56:43.626578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.626709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.626734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.270 qpair failed and we were unable to recover it. 00:30:39.270 [2024-04-17 06:56:43.626870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.627032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.627058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.270 qpair failed and we were unable to recover it. 00:30:39.270 [2024-04-17 06:56:43.627238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.627393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.627418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.270 qpair failed and we were unable to recover it. 
00:30:39.270 [2024-04-17 06:56:43.627554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.627736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.627761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.270 qpair failed and we were unable to recover it. 00:30:39.270 [2024-04-17 06:56:43.627916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.628047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.628073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.270 qpair failed and we were unable to recover it. 00:30:39.270 [2024-04-17 06:56:43.628259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.628420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.628444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.270 qpair failed and we were unable to recover it. 00:30:39.270 [2024-04-17 06:56:43.628604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.628731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.628755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.270 qpair failed and we were unable to recover it. 00:30:39.270 [2024-04-17 06:56:43.628918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.629123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.629149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.270 qpair failed and we were unable to recover it. 00:30:39.270 [2024-04-17 06:56:43.629305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.629440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.629466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.270 qpair failed and we were unable to recover it. 00:30:39.270 [2024-04-17 06:56:43.629595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.629727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.629753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.270 qpair failed and we were unable to recover it. 
00:30:39.270 [2024-04-17 06:56:43.629886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.630037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.630062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.270 qpair failed and we were unable to recover it. 00:30:39.270 [2024-04-17 06:56:43.630210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.630342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.630367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.270 qpair failed and we were unable to recover it. 00:30:39.270 [2024-04-17 06:56:43.630538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.630698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.630722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.270 qpair failed and we were unable to recover it. 00:30:39.270 [2024-04-17 06:56:43.630900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.631052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.631077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.270 qpair failed and we were unable to recover it. 00:30:39.270 [2024-04-17 06:56:43.631207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.631335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.631360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.270 qpair failed and we were unable to recover it. 00:30:39.270 [2024-04-17 06:56:43.631488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.631608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.631633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.270 qpair failed and we were unable to recover it. 00:30:39.270 [2024-04-17 06:56:43.631782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.631962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.631987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.270 qpair failed and we were unable to recover it. 
00:30:39.270 [2024-04-17 06:56:43.632158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.632293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.632317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.270 qpair failed and we were unable to recover it. 00:30:39.270 [2024-04-17 06:56:43.632444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.632575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.632600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.270 qpair failed and we were unable to recover it. 00:30:39.270 [2024-04-17 06:56:43.632750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.632903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.632927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.270 qpair failed and we were unable to recover it. 00:30:39.270 [2024-04-17 06:56:43.633077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.633224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.633249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.270 qpair failed and we were unable to recover it. 00:30:39.270 [2024-04-17 06:56:43.633401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.633560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.633587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.270 qpair failed and we were unable to recover it. 00:30:39.270 [2024-04-17 06:56:43.633721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.633883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.633909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.270 qpair failed and we were unable to recover it. 00:30:39.270 [2024-04-17 06:56:43.634057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.634207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.634232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.270 qpair failed and we were unable to recover it. 
00:30:39.270 [2024-04-17 06:56:43.634390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.634584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.634608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.270 qpair failed and we were unable to recover it. 00:30:39.270 [2024-04-17 06:56:43.634728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.634850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.634875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.270 qpair failed and we were unable to recover it. 00:30:39.270 [2024-04-17 06:56:43.635005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.270 [2024-04-17 06:56:43.635165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.635196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.271 qpair failed and we were unable to recover it. 00:30:39.271 [2024-04-17 06:56:43.635332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.635468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.635493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.271 qpair failed and we were unable to recover it. 00:30:39.271 [2024-04-17 06:56:43.635644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.635821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.635846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.271 qpair failed and we were unable to recover it. 00:30:39.271 [2024-04-17 06:56:43.635999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.636128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.636152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.271 qpair failed and we were unable to recover it. 00:30:39.271 [2024-04-17 06:56:43.636316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.636487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.636512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.271 qpair failed and we were unable to recover it. 
00:30:39.271 [2024-04-17 06:56:43.636669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.636827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.636851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.271 qpair failed and we were unable to recover it. 00:30:39.271 [2024-04-17 06:56:43.636980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.637164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.637217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.271 qpair failed and we were unable to recover it. 00:30:39.271 [2024-04-17 06:56:43.637374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.637500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.637524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.271 qpair failed and we were unable to recover it. 00:30:39.271 [2024-04-17 06:56:43.637649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.637819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.637844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.271 qpair failed and we were unable to recover it. 00:30:39.271 [2024-04-17 06:56:43.637999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.638138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.638162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.271 qpair failed and we were unable to recover it. 00:30:39.271 [2024-04-17 06:56:43.638297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.638426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.638450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.271 qpair failed and we were unable to recover it. 00:30:39.271 [2024-04-17 06:56:43.638634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.638783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.638808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.271 qpair failed and we were unable to recover it. 
00:30:39.271 [2024-04-17 06:56:43.638934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.639070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.639094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.271 qpair failed and we were unable to recover it. 00:30:39.271 [2024-04-17 06:56:43.639223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.639355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.639380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.271 qpair failed and we were unable to recover it. 00:30:39.271 [2024-04-17 06:56:43.639548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.639705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.639730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.271 qpair failed and we were unable to recover it. 00:30:39.271 [2024-04-17 06:56:43.639881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.640027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.640052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.271 qpair failed and we were unable to recover it. 00:30:39.271 [2024-04-17 06:56:43.640185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.640348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.640373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.271 qpair failed and we were unable to recover it. 00:30:39.271 [2024-04-17 06:56:43.640501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.640659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.640684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.271 qpair failed and we were unable to recover it. 00:30:39.271 [2024-04-17 06:56:43.640835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.640962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.640986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.271 qpair failed and we were unable to recover it. 
00:30:39.271 [2024-04-17 06:56:43.641115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.641254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.641281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.271 qpair failed and we were unable to recover it. 00:30:39.271 [2024-04-17 06:56:43.641435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.641566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.641591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.271 qpair failed and we were unable to recover it. 00:30:39.271 [2024-04-17 06:56:43.641755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.641913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.641938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.271 qpair failed and we were unable to recover it. 00:30:39.271 [2024-04-17 06:56:43.642098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.642251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.642277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.271 qpair failed and we were unable to recover it. 00:30:39.271 [2024-04-17 06:56:43.642406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.642593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.642618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.271 qpair failed and we were unable to recover it. 00:30:39.271 [2024-04-17 06:56:43.642750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.642885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.642909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.271 qpair failed and we were unable to recover it. 00:30:39.271 [2024-04-17 06:56:43.643038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.643162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.643194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.271 qpair failed and we were unable to recover it. 
00:30:39.271 [2024-04-17 06:56:43.643349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.643498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.643523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.271 qpair failed and we were unable to recover it. 00:30:39.271 [2024-04-17 06:56:43.643680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.643833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.643858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.271 qpair failed and we were unable to recover it. 00:30:39.271 [2024-04-17 06:56:43.644013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.644143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.271 [2024-04-17 06:56:43.644168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.271 qpair failed and we were unable to recover it. 00:30:39.271 [2024-04-17 06:56:43.644299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.644481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.644506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.272 qpair failed and we were unable to recover it. 00:30:39.272 [2024-04-17 06:56:43.644655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.644814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.644839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.272 qpair failed and we were unable to recover it. 00:30:39.272 [2024-04-17 06:56:43.644968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.645102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.645127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.272 qpair failed and we were unable to recover it. 00:30:39.272 [2024-04-17 06:56:43.645290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.645424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.645449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.272 qpair failed and we were unable to recover it. 
00:30:39.272 [2024-04-17 06:56:43.645591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.645739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.645765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.272 qpair failed and we were unable to recover it. 00:30:39.272 [2024-04-17 06:56:43.645916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.646035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.646061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.272 qpair failed and we were unable to recover it. 00:30:39.272 [2024-04-17 06:56:43.646232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.646357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.646382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.272 qpair failed and we were unable to recover it. 00:30:39.272 [2024-04-17 06:56:43.646507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.646635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.646660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.272 qpair failed and we were unable to recover it. 00:30:39.272 [2024-04-17 06:56:43.646785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.646922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.646947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.272 qpair failed and we were unable to recover it. 00:30:39.272 [2024-04-17 06:56:43.647075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.647241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.647268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.272 qpair failed and we were unable to recover it. 00:30:39.272 [2024-04-17 06:56:43.647420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.647575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.647600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.272 qpair failed and we were unable to recover it. 
00:30:39.272 [2024-04-17 06:56:43.647728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.647907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.647933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.272 qpair failed and we were unable to recover it. 00:30:39.272 [2024-04-17 06:56:43.648055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.648185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.648211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.272 qpair failed and we were unable to recover it. 00:30:39.272 [2024-04-17 06:56:43.648386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.648505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.648530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.272 qpair failed and we were unable to recover it. 00:30:39.272 [2024-04-17 06:56:43.648683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.648822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.648846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.272 qpair failed and we were unable to recover it. 00:30:39.272 [2024-04-17 06:56:43.648995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.649132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.649158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.272 qpair failed and we were unable to recover it. 00:30:39.272 [2024-04-17 06:56:43.649308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.649440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.649465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.272 qpair failed and we were unable to recover it. 00:30:39.272 [2024-04-17 06:56:43.649596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.649774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.649798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.272 qpair failed and we were unable to recover it. 
00:30:39.272 [2024-04-17 06:56:43.649946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.650083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.650108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.272 qpair failed and we were unable to recover it. 00:30:39.272 [2024-04-17 06:56:43.650250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.650381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.650407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.272 qpair failed and we were unable to recover it. 00:30:39.272 [2024-04-17 06:56:43.650579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.650700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.650727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.272 qpair failed and we were unable to recover it. 00:30:39.272 [2024-04-17 06:56:43.650854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.651009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.651034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.272 qpair failed and we were unable to recover it. 00:30:39.272 [2024-04-17 06:56:43.651166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.651343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.651367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.272 qpair failed and we were unable to recover it. 00:30:39.272 [2024-04-17 06:56:43.651522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.651678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.651703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.272 qpair failed and we were unable to recover it. 00:30:39.272 [2024-04-17 06:56:43.651823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.651954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.651979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.272 qpair failed and we were unable to recover it. 
00:30:39.272 [2024-04-17 06:56:43.652125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.652245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.652272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.272 qpair failed and we were unable to recover it. 00:30:39.272 [2024-04-17 06:56:43.652398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.652518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.652542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.272 qpair failed and we were unable to recover it. 00:30:39.272 [2024-04-17 06:56:43.652697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.652824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.652848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.272 qpair failed and we were unable to recover it. 00:30:39.272 [2024-04-17 06:56:43.653000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.653121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.272 [2024-04-17 06:56:43.653147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.272 qpair failed and we were unable to recover it. 00:30:39.273 [2024-04-17 06:56:43.653312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.653448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.653472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.273 qpair failed and we were unable to recover it. 00:30:39.273 [2024-04-17 06:56:43.653627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.653784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.653809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.273 qpair failed and we were unable to recover it. 00:30:39.273 [2024-04-17 06:56:43.653938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.654093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.654117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.273 qpair failed and we were unable to recover it. 
00:30:39.273 [2024-04-17 06:56:43.654262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.654447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.654472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.273 qpair failed and we were unable to recover it. 00:30:39.273 [2024-04-17 06:56:43.654627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.654779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.654820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.273 qpair failed and we were unable to recover it. 00:30:39.273 [2024-04-17 06:56:43.654982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.655118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.655143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.273 qpair failed and we were unable to recover it. 00:30:39.273 [2024-04-17 06:56:43.655280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.655413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.655438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.273 qpair failed and we were unable to recover it. 00:30:39.273 [2024-04-17 06:56:43.655595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.655726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.655750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.273 qpair failed and we were unable to recover it. 00:30:39.273 [2024-04-17 06:56:43.655903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.656090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.656114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.273 qpair failed and we were unable to recover it. 00:30:39.273 [2024-04-17 06:56:43.656247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.656383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.656407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.273 qpair failed and we were unable to recover it. 
00:30:39.273 [2024-04-17 06:56:43.656565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.656748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.656773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.273 qpair failed and we were unable to recover it. 00:30:39.273 [2024-04-17 06:56:43.656924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.657067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.657091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.273 qpair failed and we were unable to recover it. 00:30:39.273 [2024-04-17 06:56:43.657222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.657349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.657375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.273 qpair failed and we were unable to recover it. 00:30:39.273 [2024-04-17 06:56:43.657508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.657663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.657687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.273 qpair failed and we were unable to recover it. 00:30:39.273 [2024-04-17 06:56:43.657817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.657972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.657998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.273 qpair failed and we were unable to recover it. 00:30:39.273 [2024-04-17 06:56:43.658153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.658323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.658348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.273 qpair failed and we were unable to recover it. 00:30:39.273 [2024-04-17 06:56:43.658503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.658658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.658683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.273 qpair failed and we were unable to recover it. 
00:30:39.273 [2024-04-17 06:56:43.658836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.658993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.659017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.273 qpair failed and we were unable to recover it. 00:30:39.273 [2024-04-17 06:56:43.659140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.659296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.659321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.273 qpair failed and we were unable to recover it. 00:30:39.273 [2024-04-17 06:56:43.659454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.659585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.659610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.273 qpair failed and we were unable to recover it. 00:30:39.273 [2024-04-17 06:56:43.659740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.659892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.659916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.273 qpair failed and we were unable to recover it. 00:30:39.273 [2024-04-17 06:56:43.660073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.660208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.660235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.273 qpair failed and we were unable to recover it. 00:30:39.273 [2024-04-17 06:56:43.660391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.660519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.660545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.273 qpair failed and we were unable to recover it. 00:30:39.273 [2024-04-17 06:56:43.660705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.660857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.273 [2024-04-17 06:56:43.660881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.273 qpair failed and we were unable to recover it. 
00:30:39.273 [2024-04-17 06:56:43.661038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.661171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.661203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.274 qpair failed and we were unable to recover it. 00:30:39.274 [2024-04-17 06:56:43.661368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.661494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.661520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.274 qpair failed and we were unable to recover it. 00:30:39.274 [2024-04-17 06:56:43.661666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.661796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.661820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.274 qpair failed and we were unable to recover it. 00:30:39.274 [2024-04-17 06:56:43.661956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.662087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.662112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.274 qpair failed and we were unable to recover it. 00:30:39.274 [2024-04-17 06:56:43.662265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.662400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.662424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.274 qpair failed and we were unable to recover it. 00:30:39.274 [2024-04-17 06:56:43.662554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.662682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.662708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.274 qpair failed and we were unable to recover it. 00:30:39.274 [2024-04-17 06:56:43.662862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.663019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.663047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.274 qpair failed and we were unable to recover it. 
00:30:39.274 [2024-04-17 06:56:43.663209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.663344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.663369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.274 qpair failed and we were unable to recover it. 00:30:39.274 [2024-04-17 06:56:43.663529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.663690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.663715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.274 qpair failed and we were unable to recover it. 00:30:39.274 [2024-04-17 06:56:43.663874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.664032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.664058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.274 qpair failed and we were unable to recover it. 00:30:39.274 [2024-04-17 06:56:43.664189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.664352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.664376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.274 qpair failed and we were unable to recover it. 00:30:39.274 [2024-04-17 06:56:43.664536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.664690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.664716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.274 qpair failed and we were unable to recover it. 00:30:39.274 [2024-04-17 06:56:43.664852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.664983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.665008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.274 qpair failed and we were unable to recover it. 00:30:39.274 [2024-04-17 06:56:43.665159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.665287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.665313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.274 qpair failed and we were unable to recover it. 
00:30:39.274 [2024-04-17 06:56:43.665432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.665618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.665644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.274 qpair failed and we were unable to recover it. 00:30:39.274 [2024-04-17 06:56:43.665770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.665926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.665951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.274 qpair failed and we were unable to recover it. 00:30:39.274 [2024-04-17 06:56:43.666102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.666225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.666254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.274 qpair failed and we were unable to recover it. 00:30:39.274 [2024-04-17 06:56:43.666380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.666504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.666529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.274 qpair failed and we were unable to recover it. 00:30:39.274 [2024-04-17 06:56:43.666663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.666793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.666817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.274 qpair failed and we were unable to recover it. 00:30:39.274 [2024-04-17 06:56:43.666970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.667089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.667113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.274 qpair failed and we were unable to recover it. 00:30:39.274 [2024-04-17 06:56:43.667276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.667418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.667442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.274 qpair failed and we were unable to recover it. 
00:30:39.274 [2024-04-17 06:56:43.667590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.667735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.667759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.274 qpair failed and we were unable to recover it. 00:30:39.274 [2024-04-17 06:56:43.667907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.668088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.668113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.274 qpair failed and we were unable to recover it. 00:30:39.274 [2024-04-17 06:56:43.668274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.668426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.668451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.274 qpair failed and we were unable to recover it. 00:30:39.274 [2024-04-17 06:56:43.668616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.668763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.668787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.274 qpair failed and we were unable to recover it. 00:30:39.274 [2024-04-17 06:56:43.668969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.669147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.669172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.274 qpair failed and we were unable to recover it. 00:30:39.274 [2024-04-17 06:56:43.669321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.669478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.669508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.274 qpair failed and we were unable to recover it. 00:30:39.274 [2024-04-17 06:56:43.669639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.669765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.669789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.274 qpair failed and we were unable to recover it. 
00:30:39.274 [2024-04-17 06:56:43.669945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.670073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.274 [2024-04-17 06:56:43.670099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.275 qpair failed and we were unable to recover it. 00:30:39.275 [2024-04-17 06:56:43.670259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.670398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.670423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.275 qpair failed and we were unable to recover it. 00:30:39.275 [2024-04-17 06:56:43.670580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.670731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.670756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.275 qpair failed and we were unable to recover it. 00:30:39.275 [2024-04-17 06:56:43.670877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.671001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.671027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.275 qpair failed and we were unable to recover it. 00:30:39.275 [2024-04-17 06:56:43.671149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.671318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.671343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.275 qpair failed and we were unable to recover it. 00:30:39.275 [2024-04-17 06:56:43.671496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.671651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.671676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.275 qpair failed and we were unable to recover it. 00:30:39.275 [2024-04-17 06:56:43.671846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.671967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.671991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.275 qpair failed and we were unable to recover it. 
00:30:39.275 [2024-04-17 06:56:43.672123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.672262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.672288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.275 qpair failed and we were unable to recover it. 00:30:39.275 [2024-04-17 06:56:43.672411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.672566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.672594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.275 qpair failed and we were unable to recover it. 00:30:39.275 [2024-04-17 06:56:43.672762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.672895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.672922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.275 qpair failed and we were unable to recover it. 00:30:39.275 [2024-04-17 06:56:43.673070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.673230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.673256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.275 qpair failed and we were unable to recover it. 00:30:39.275 [2024-04-17 06:56:43.673390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.673576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.673601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.275 qpair failed and we were unable to recover it. 00:30:39.275 [2024-04-17 06:56:43.673786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.673908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.673933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.275 qpair failed and we were unable to recover it. 00:30:39.275 [2024-04-17 06:56:43.674081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.674268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.674294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.275 qpair failed and we were unable to recover it. 
00:30:39.275 [2024-04-17 06:56:43.674454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.674586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.674610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.275 qpair failed and we were unable to recover it. 00:30:39.275 [2024-04-17 06:56:43.674734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.674890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.674914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.275 qpair failed and we were unable to recover it. 00:30:39.275 [2024-04-17 06:56:43.675035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.675218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.675244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.275 qpair failed and we were unable to recover it. 00:30:39.275 [2024-04-17 06:56:43.675367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.675605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.675630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.275 qpair failed and we were unable to recover it. 00:30:39.275 [2024-04-17 06:56:43.675785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.675932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.675957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.275 qpair failed and we were unable to recover it. 00:30:39.275 [2024-04-17 06:56:43.676119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.676277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.676302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.275 qpair failed and we were unable to recover it. 00:30:39.275 [2024-04-17 06:56:43.676460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.676643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.676668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.275 qpair failed and we were unable to recover it. 
00:30:39.275 [2024-04-17 06:56:43.676819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.676999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.677023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.275 qpair failed and we were unable to recover it. 00:30:39.275 [2024-04-17 06:56:43.677144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.677280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.677307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.275 qpair failed and we were unable to recover it. 00:30:39.275 [2024-04-17 06:56:43.677483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.677636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.677661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.275 qpair failed and we were unable to recover it. 00:30:39.275 [2024-04-17 06:56:43.677844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.678025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.678050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.275 qpair failed and we were unable to recover it. 00:30:39.275 [2024-04-17 06:56:43.678196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.678322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.678347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.275 qpair failed and we were unable to recover it. 00:30:39.275 [2024-04-17 06:56:43.678502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.678693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.678718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.275 qpair failed and we were unable to recover it. 00:30:39.275 [2024-04-17 06:56:43.678904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.679062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.679088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.275 qpair failed and we were unable to recover it. 
00:30:39.275 [2024-04-17 06:56:43.679255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.679410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.679436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.275 qpair failed and we were unable to recover it. 00:30:39.275 [2024-04-17 06:56:43.679600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.275 [2024-04-17 06:56:43.679780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.679805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.276 qpair failed and we were unable to recover it. 00:30:39.276 [2024-04-17 06:56:43.679957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.680114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.680139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.276 qpair failed and we were unable to recover it. 00:30:39.276 [2024-04-17 06:56:43.680281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.680463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.680488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.276 qpair failed and we were unable to recover it. 00:30:39.276 [2024-04-17 06:56:43.680647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.680806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.680831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.276 qpair failed and we were unable to recover it. 00:30:39.276 [2024-04-17 06:56:43.680987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.681115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.681139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.276 qpair failed and we were unable to recover it. 00:30:39.276 [2024-04-17 06:56:43.681275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.681424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.681448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.276 qpair failed and we were unable to recover it. 
00:30:39.276 [2024-04-17 06:56:43.681604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.681728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.681753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.276 qpair failed and we were unable to recover it. 00:30:39.276 [2024-04-17 06:56:43.681872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.682021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.682046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.276 qpair failed and we were unable to recover it. 00:30:39.276 [2024-04-17 06:56:43.682170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.682300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.682325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.276 qpair failed and we were unable to recover it. 00:30:39.276 [2024-04-17 06:56:43.682473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.682620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.682643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.276 qpair failed and we were unable to recover it. 00:30:39.276 [2024-04-17 06:56:43.682788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.682940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.682965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.276 qpair failed and we were unable to recover it. 00:30:39.276 [2024-04-17 06:56:43.683123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.683270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.683296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.276 qpair failed and we were unable to recover it. 00:30:39.276 [2024-04-17 06:56:43.683480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.683643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.683668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.276 qpair failed and we were unable to recover it. 
00:30:39.276 [2024-04-17 06:56:43.683851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.684029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.684053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.276 qpair failed and we were unable to recover it. 00:30:39.276 [2024-04-17 06:56:43.684234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.684371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.684396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.276 qpair failed and we were unable to recover it. 00:30:39.276 [2024-04-17 06:56:43.684546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.684674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.684698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.276 qpair failed and we were unable to recover it. 00:30:39.276 [2024-04-17 06:56:43.684824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.684948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.684972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.276 qpair failed and we were unable to recover it. 00:30:39.276 [2024-04-17 06:56:43.685139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.685277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.685302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.276 qpair failed and we were unable to recover it. 00:30:39.276 [2024-04-17 06:56:43.685469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.685598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.685622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.276 qpair failed and we were unable to recover it. 00:30:39.276 [2024-04-17 06:56:43.685743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.685899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.685923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.276 qpair failed and we were unable to recover it. 
00:30:39.276 [2024-04-17 06:56:43.686057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.686218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.686243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.276 qpair failed and we were unable to recover it. 00:30:39.276 [2024-04-17 06:56:43.686397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.686556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.686580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.276 qpair failed and we were unable to recover it. 00:30:39.276 [2024-04-17 06:56:43.686757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.686903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.686927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.276 qpair failed and we were unable to recover it. 00:30:39.276 [2024-04-17 06:56:43.687045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.687208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.687233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.276 qpair failed and we were unable to recover it. 00:30:39.276 [2024-04-17 06:56:43.687420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.687547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.687572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.276 qpair failed and we were unable to recover it. 00:30:39.276 [2024-04-17 06:56:43.687696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.687846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.687870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.276 qpair failed and we were unable to recover it. 00:30:39.276 [2024-04-17 06:56:43.688026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.688167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.688198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.276 qpair failed and we were unable to recover it. 
00:30:39.276 [2024-04-17 06:56:43.688330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.688501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.688525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.276 qpair failed and we were unable to recover it. 00:30:39.276 [2024-04-17 06:56:43.688673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.688813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.276 [2024-04-17 06:56:43.688838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.276 qpair failed and we were unable to recover it. 00:30:39.276 [2024-04-17 06:56:43.688968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.689148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.689172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.277 qpair failed and we were unable to recover it. 00:30:39.277 [2024-04-17 06:56:43.689309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.689434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.689458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.277 qpair failed and we were unable to recover it. 00:30:39.277 [2024-04-17 06:56:43.689606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.689754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.689778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.277 qpair failed and we were unable to recover it. 00:30:39.277 [2024-04-17 06:56:43.689958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.690106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.690129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.277 qpair failed and we were unable to recover it. 00:30:39.277 [2024-04-17 06:56:43.690283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.690403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.690428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.277 qpair failed and we were unable to recover it. 
00:30:39.277 [2024-04-17 06:56:43.690580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.690729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.690753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.277 qpair failed and we were unable to recover it. 00:30:39.277 [2024-04-17 06:56:43.690885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.691054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.691078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.277 qpair failed and we were unable to recover it. 00:30:39.277 [2024-04-17 06:56:43.691228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.691386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.691411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.277 qpair failed and we were unable to recover it. 00:30:39.277 [2024-04-17 06:56:43.691558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.691686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.691710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.277 qpair failed and we were unable to recover it. 00:30:39.277 [2024-04-17 06:56:43.691883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.692048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.692073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.277 qpair failed and we were unable to recover it. 00:30:39.277 [2024-04-17 06:56:43.692230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.692363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.692388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.277 qpair failed and we were unable to recover it. 00:30:39.277 [2024-04-17 06:56:43.692563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.692691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.692716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.277 qpair failed and we were unable to recover it. 
00:30:39.277 [2024-04-17 06:56:43.692840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.692997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.693022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.277 qpair failed and we were unable to recover it. 00:30:39.277 [2024-04-17 06:56:43.693173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.693373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.693399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.277 qpair failed and we were unable to recover it. 00:30:39.277 [2024-04-17 06:56:43.693560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.693713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.693737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.277 qpair failed and we were unable to recover it. 00:30:39.277 [2024-04-17 06:56:43.693884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.694036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.694060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.277 qpair failed and we were unable to recover it. 00:30:39.277 [2024-04-17 06:56:43.694186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.694397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.694421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.277 qpair failed and we were unable to recover it. 00:30:39.277 [2024-04-17 06:56:43.694570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.694687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.694711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.277 qpair failed and we were unable to recover it. 00:30:39.277 [2024-04-17 06:56:43.694865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.694991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.695015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.277 qpair failed and we were unable to recover it. 
00:30:39.277 [2024-04-17 06:56:43.695149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.695304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.695329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.277 qpair failed and we were unable to recover it. 00:30:39.277 [2024-04-17 06:56:43.695483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.695604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.695628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.277 qpair failed and we were unable to recover it. 00:30:39.277 [2024-04-17 06:56:43.695793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.695975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.696000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.277 qpair failed and we were unable to recover it. 00:30:39.277 [2024-04-17 06:56:43.696153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.696305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.696330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.277 qpair failed and we were unable to recover it. 00:30:39.277 [2024-04-17 06:56:43.696486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.696640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.696665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.277 qpair failed and we were unable to recover it. 00:30:39.277 [2024-04-17 06:56:43.696793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.696950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.696974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.277 qpair failed and we were unable to recover it. 00:30:39.277 [2024-04-17 06:56:43.697121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.697278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.697304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.277 qpair failed and we were unable to recover it. 
00:30:39.277 [2024-04-17 06:56:43.697455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.697600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.277 [2024-04-17 06:56:43.697625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.278 qpair failed and we were unable to recover it. 00:30:39.278 [2024-04-17 06:56:43.697776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.697927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.697952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.278 qpair failed and we were unable to recover it. 00:30:39.278 [2024-04-17 06:56:43.698103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.698248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.698272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.278 qpair failed and we were unable to recover it. 00:30:39.278 [2024-04-17 06:56:43.698404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.698565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.698589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.278 qpair failed and we were unable to recover it. 00:30:39.278 [2024-04-17 06:56:43.698750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.698870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.698895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.278 qpair failed and we were unable to recover it. 00:30:39.278 [2024-04-17 06:56:43.699080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.699250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.699276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.278 qpair failed and we were unable to recover it. 00:30:39.278 [2024-04-17 06:56:43.699459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.699590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.699614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.278 qpair failed and we were unable to recover it. 
00:30:39.278 [2024-04-17 06:56:43.699769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.699919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.699943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.278 qpair failed and we were unable to recover it. 00:30:39.278 [2024-04-17 06:56:43.700070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.700236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.700262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.278 qpair failed and we were unable to recover it. 00:30:39.278 [2024-04-17 06:56:43.700443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.700589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.700614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.278 qpair failed and we were unable to recover it. 00:30:39.278 [2024-04-17 06:56:43.700801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.700957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.700982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.278 qpair failed and we were unable to recover it. 00:30:39.278 [2024-04-17 06:56:43.701106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.701231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.701256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.278 qpair failed and we were unable to recover it. 00:30:39.278 [2024-04-17 06:56:43.701382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.701507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.701533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.278 qpair failed and we were unable to recover it. 00:30:39.278 [2024-04-17 06:56:43.701679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.701803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.701827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.278 qpair failed and we were unable to recover it. 
00:30:39.278 [2024-04-17 06:56:43.701984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.702133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.702157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.278 qpair failed and we were unable to recover it. 00:30:39.278 [2024-04-17 06:56:43.702286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.702454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.702479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.278 qpair failed and we were unable to recover it. 00:30:39.278 [2024-04-17 06:56:43.702608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.702760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.702785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.278 qpair failed and we were unable to recover it. 00:30:39.278 [2024-04-17 06:56:43.702932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.703086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.703112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.278 qpair failed and we were unable to recover it. 00:30:39.278 [2024-04-17 06:56:43.703274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.703408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.703433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.278 qpair failed and we were unable to recover it. 00:30:39.278 [2024-04-17 06:56:43.703573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.703706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.703730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.278 qpair failed and we were unable to recover it. 00:30:39.278 [2024-04-17 06:56:43.703886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.704045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.704071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.278 qpair failed and we were unable to recover it. 
00:30:39.278 [2024-04-17 06:56:43.704243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.704405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.704429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.278 qpair failed and we were unable to recover it. 00:30:39.278 [2024-04-17 06:56:43.704555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.704723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.704748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.278 qpair failed and we were unable to recover it. 00:30:39.278 [2024-04-17 06:56:43.704904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.705042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.705067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.278 qpair failed and we were unable to recover it. 00:30:39.278 [2024-04-17 06:56:43.705225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.705372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.705396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.278 qpair failed and we were unable to recover it. 00:30:39.278 [2024-04-17 06:56:43.705547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.705708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.705734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.278 qpair failed and we were unable to recover it. 00:30:39.278 [2024-04-17 06:56:43.705888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.706019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.278 [2024-04-17 06:56:43.706045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.278 qpair failed and we were unable to recover it. 00:30:39.278 [2024-04-17 06:56:43.706183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.279 [2024-04-17 06:56:43.706309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.279 [2024-04-17 06:56:43.706334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.279 qpair failed and we were unable to recover it. 
00:30:39.279 [2024-04-17 06:56:43.706495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.279 [2024-04-17 06:56:43.706621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.279 [2024-04-17 06:56:43.706646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.279 qpair failed and we were unable to recover it. 00:30:39.279 [2024-04-17 06:56:43.706778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.279 [2024-04-17 06:56:43.706902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.279 [2024-04-17 06:56:43.706928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.279 qpair failed and we were unable to recover it. 00:30:39.279 [2024-04-17 06:56:43.707057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.279 [2024-04-17 06:56:43.707186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.279 [2024-04-17 06:56:43.707212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.279 qpair failed and we were unable to recover it. 00:30:39.279 [2024-04-17 06:56:43.707346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.279 [2024-04-17 06:56:43.707479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.279 [2024-04-17 06:56:43.707503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.279 qpair failed and we were unable to recover it. 00:30:39.279 [2024-04-17 06:56:43.707633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.279 [2024-04-17 06:56:43.707789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.279 [2024-04-17 06:56:43.707814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.279 qpair failed and we were unable to recover it. 00:30:39.279 [2024-04-17 06:56:43.707940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.279 [2024-04-17 06:56:43.708103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.279 [2024-04-17 06:56:43.708128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.279 qpair failed and we were unable to recover it. 00:30:39.279 [2024-04-17 06:56:43.708259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.279 [2024-04-17 06:56:43.708388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.279 [2024-04-17 06:56:43.708414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.279 qpair failed and we were unable to recover it. 
00:30:39.279 [2024-04-17 06:56:43.708543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.279 [2024-04-17 06:56:43.708702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.279 [2024-04-17 06:56:43.708727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.279 qpair failed and we were unable to recover it. 00:30:39.279 [2024-04-17 06:56:43.708884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.279 [2024-04-17 06:56:43.709044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.279 [2024-04-17 06:56:43.709070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.279 qpair failed and we were unable to recover it. 00:30:39.279 [2024-04-17 06:56:43.709223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.279 [2024-04-17 06:56:43.709361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.279 [2024-04-17 06:56:43.709385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.279 qpair failed and we were unable to recover it. 00:30:39.279 [2024-04-17 06:56:43.709515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.279 [2024-04-17 06:56:43.709665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.279 [2024-04-17 06:56:43.709689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.279 qpair failed and we were unable to recover it. 00:30:39.279 [2024-04-17 06:56:43.709813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.279 [2024-04-17 06:56:43.709971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.279 [2024-04-17 06:56:43.709996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.279 qpair failed and we were unable to recover it. 00:30:39.279 [2024-04-17 06:56:43.710157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.279 [2024-04-17 06:56:43.710303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.279 [2024-04-17 06:56:43.710328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.279 qpair failed and we were unable to recover it. 00:30:39.279 [2024-04-17 06:56:43.710459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.279 [2024-04-17 06:56:43.710613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.279 [2024-04-17 06:56:43.710637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.279 qpair failed and we were unable to recover it. 
00:30:39.284 [2024-04-17 06:56:43.757231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.284 [2024-04-17 06:56:43.757361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.284 [2024-04-17 06:56:43.757386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.284 qpair failed and we were unable to recover it. 00:30:39.284 [2024-04-17 06:56:43.757550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.284 [2024-04-17 06:56:43.757710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.284 [2024-04-17 06:56:43.757735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.284 qpair failed and we were unable to recover it. 00:30:39.284 [2024-04-17 06:56:43.757887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.284 [2024-04-17 06:56:43.758043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.284 [2024-04-17 06:56:43.758068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.284 qpair failed and we were unable to recover it. 00:30:39.284 [2024-04-17 06:56:43.758228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.284 [2024-04-17 06:56:43.758353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.284 [2024-04-17 06:56:43.758377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.284 qpair failed and we were unable to recover it. 00:30:39.284 [2024-04-17 06:56:43.758501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.284 [2024-04-17 06:56:43.758681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.284 [2024-04-17 06:56:43.758706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.284 qpair failed and we were unable to recover it. 00:30:39.284 [2024-04-17 06:56:43.758856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.284 [2024-04-17 06:56:43.758979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.759003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.285 qpair failed and we were unable to recover it. 00:30:39.285 [2024-04-17 06:56:43.759123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.759260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.759285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.285 qpair failed and we were unable to recover it. 
00:30:39.285 [2024-04-17 06:56:43.759420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.759599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.759624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.285 qpair failed and we were unable to recover it. 00:30:39.285 [2024-04-17 06:56:43.759752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.759901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.759926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.285 qpair failed and we were unable to recover it. 00:30:39.285 [2024-04-17 06:56:43.760082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.760215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.760242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.285 qpair failed and we were unable to recover it. 00:30:39.285 [2024-04-17 06:56:43.760371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.760494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.760520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.285 qpair failed and we were unable to recover it. 00:30:39.285 [2024-04-17 06:56:43.760670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.760830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.760856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.285 qpair failed and we were unable to recover it. 00:30:39.285 [2024-04-17 06:56:43.761009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.761132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.761156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.285 qpair failed and we were unable to recover it. 00:30:39.285 [2024-04-17 06:56:43.761357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.761480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.761504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.285 qpair failed and we were unable to recover it. 
00:30:39.285 [2024-04-17 06:56:43.761657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.761782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.761806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.285 qpair failed and we were unable to recover it. 00:30:39.285 [2024-04-17 06:56:43.761968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.762130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.762155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.285 qpair failed and we were unable to recover it. 00:30:39.285 [2024-04-17 06:56:43.762320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.762454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.762478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.285 qpair failed and we were unable to recover it. 00:30:39.285 [2024-04-17 06:56:43.762659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.762783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.762807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.285 qpair failed and we were unable to recover it. 00:30:39.285 [2024-04-17 06:56:43.762939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.763073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.763097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.285 qpair failed and we were unable to recover it. 00:30:39.285 [2024-04-17 06:56:43.763272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.763402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.763427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.285 qpair failed and we were unable to recover it. 00:30:39.285 [2024-04-17 06:56:43.763598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.763729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.763753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.285 qpair failed and we were unable to recover it. 
00:30:39.285 [2024-04-17 06:56:43.763928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.764071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.764095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.285 qpair failed and we were unable to recover it. 00:30:39.285 [2024-04-17 06:56:43.764281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.764409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.764434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.285 qpair failed and we were unable to recover it. 00:30:39.285 [2024-04-17 06:56:43.764595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.764722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.764748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.285 qpair failed and we were unable to recover it. 00:30:39.285 [2024-04-17 06:56:43.764916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.765040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.765065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.285 qpair failed and we were unable to recover it. 00:30:39.285 [2024-04-17 06:56:43.765195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.765321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.765346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.285 qpair failed and we were unable to recover it. 00:30:39.285 [2024-04-17 06:56:43.765477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.765656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.765681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.285 qpair failed and we were unable to recover it. 00:30:39.285 [2024-04-17 06:56:43.765834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.766024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.766052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.285 qpair failed and we were unable to recover it. 
00:30:39.285 [2024-04-17 06:56:43.766186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.766331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.766356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.285 qpair failed and we were unable to recover it. 00:30:39.285 [2024-04-17 06:56:43.766522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.766676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.766701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.285 qpair failed and we were unable to recover it. 00:30:39.285 [2024-04-17 06:56:43.766857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.767016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.767041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.285 qpair failed and we were unable to recover it. 00:30:39.285 [2024-04-17 06:56:43.767209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.767358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.767383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.285 qpair failed and we were unable to recover it. 00:30:39.285 [2024-04-17 06:56:43.767540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.767669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.767693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.285 qpair failed and we were unable to recover it. 00:30:39.285 [2024-04-17 06:56:43.767851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.767978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.285 [2024-04-17 06:56:43.768003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.285 qpair failed and we were unable to recover it. 00:30:39.285 [2024-04-17 06:56:43.768150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.768311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.768337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.286 qpair failed and we were unable to recover it. 
00:30:39.286 [2024-04-17 06:56:43.768467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.768603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.768628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.286 qpair failed and we were unable to recover it. 00:30:39.286 [2024-04-17 06:56:43.768816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.768971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.768995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.286 qpair failed and we were unable to recover it. 00:30:39.286 [2024-04-17 06:56:43.769128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.769257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.769287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.286 qpair failed and we were unable to recover it. 00:30:39.286 [2024-04-17 06:56:43.769420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.769552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.769577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.286 qpair failed and we were unable to recover it. 00:30:39.286 [2024-04-17 06:56:43.769728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.769906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.769930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.286 qpair failed and we were unable to recover it. 00:30:39.286 [2024-04-17 06:56:43.770060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.770188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.770213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.286 qpair failed and we were unable to recover it. 00:30:39.286 [2024-04-17 06:56:43.770363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.770512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.770537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.286 qpair failed and we were unable to recover it. 
00:30:39.286 [2024-04-17 06:56:43.770656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.770786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.770813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.286 qpair failed and we were unable to recover it. 00:30:39.286 [2024-04-17 06:56:43.770939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.771092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.771117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.286 qpair failed and we were unable to recover it. 00:30:39.286 [2024-04-17 06:56:43.771265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.771395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.771420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.286 qpair failed and we were unable to recover it. 00:30:39.286 [2024-04-17 06:56:43.771558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.771681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.771705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.286 qpair failed and we were unable to recover it. 00:30:39.286 [2024-04-17 06:56:43.771861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.771995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.772020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.286 qpair failed and we were unable to recover it. 00:30:39.286 [2024-04-17 06:56:43.772156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.772295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.772324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.286 qpair failed and we were unable to recover it. 00:30:39.286 [2024-04-17 06:56:43.772455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.772589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.772615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.286 qpair failed and we were unable to recover it. 
00:30:39.286 [2024-04-17 06:56:43.772746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.772867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.772892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.286 qpair failed and we were unable to recover it. 00:30:39.286 [2024-04-17 06:56:43.773016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.773171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.773202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.286 qpair failed and we were unable to recover it. 00:30:39.286 [2024-04-17 06:56:43.773347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.773503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.773529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.286 qpair failed and we were unable to recover it. 00:30:39.286 [2024-04-17 06:56:43.773711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.773857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.773883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.286 qpair failed and we were unable to recover it. 00:30:39.286 [2024-04-17 06:56:43.774011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.774134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.774159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.286 qpair failed and we were unable to recover it. 00:30:39.286 [2024-04-17 06:56:43.774306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.774454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.774478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.286 qpair failed and we were unable to recover it. 00:30:39.286 [2024-04-17 06:56:43.774644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.774798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.774824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.286 qpair failed and we were unable to recover it. 
00:30:39.286 [2024-04-17 06:56:43.774980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.775131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.775155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:30:39.286 qpair failed and we were unable to recover it. 00:30:39.286 [2024-04-17 06:56:43.775322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.775497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.775529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.286 qpair failed and we were unable to recover it. 00:30:39.286 [2024-04-17 06:56:43.775669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.775826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.775856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.286 qpair failed and we were unable to recover it. 00:30:39.286 [2024-04-17 06:56:43.776027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.776205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.776233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.286 qpair failed and we were unable to recover it. 00:30:39.286 [2024-04-17 06:56:43.776365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.776522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.776546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.286 qpair failed and we were unable to recover it. 00:30:39.286 [2024-04-17 06:56:43.776677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.776803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.776844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.286 qpair failed and we were unable to recover it. 00:30:39.286 [2024-04-17 06:56:43.777009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.777149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.286 [2024-04-17 06:56:43.777185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.286 qpair failed and we were unable to recover it. 
00:30:39.287 [2024-04-17 06:56:43.777353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.777482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.777523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.287 qpair failed and we were unable to recover it. 00:30:39.287 [2024-04-17 06:56:43.777672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.777840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.777867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.287 qpair failed and we were unable to recover it. 00:30:39.287 [2024-04-17 06:56:43.778018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.778195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.778220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.287 qpair failed and we were unable to recover it. 00:30:39.287 [2024-04-17 06:56:43.778345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.778508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.778548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.287 qpair failed and we were unable to recover it. 00:30:39.287 [2024-04-17 06:56:43.778724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.778855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.778882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.287 qpair failed and we were unable to recover it. 00:30:39.287 [2024-04-17 06:56:43.779036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.779171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.779204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.287 qpair failed and we were unable to recover it. 00:30:39.287 [2024-04-17 06:56:43.779345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.779515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.779541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.287 qpair failed and we were unable to recover it. 
00:30:39.287 [2024-04-17 06:56:43.779671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.779843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.779870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.287 qpair failed and we were unable to recover it. 00:30:39.287 [2024-04-17 06:56:43.780020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.780193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.780221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.287 qpair failed and we were unable to recover it. 00:30:39.287 [2024-04-17 06:56:43.780371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.780526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.780549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.287 qpair failed and we were unable to recover it. 00:30:39.287 [2024-04-17 06:56:43.780716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.780913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.780940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.287 qpair failed and we were unable to recover it. 00:30:39.287 [2024-04-17 06:56:43.781109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.781307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.781332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.287 qpair failed and we were unable to recover it. 00:30:39.287 [2024-04-17 06:56:43.781488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.781691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.781719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.287 qpair failed and we were unable to recover it. 00:30:39.287 [2024-04-17 06:56:43.781919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.782055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.782082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.287 qpair failed and we were unable to recover it. 
00:30:39.287 [2024-04-17 06:56:43.782240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.782373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.782398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.287 qpair failed and we were unable to recover it. 00:30:39.287 [2024-04-17 06:56:43.782561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.782739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.782780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.287 qpair failed and we were unable to recover it. 00:30:39.287 [2024-04-17 06:56:43.782921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.783053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.783079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.287 qpair failed and we were unable to recover it. 00:30:39.287 [2024-04-17 06:56:43.783263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.783417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.783441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.287 qpair failed and we were unable to recover it. 00:30:39.287 [2024-04-17 06:56:43.783651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.783788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.783815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.287 qpair failed and we were unable to recover it. 00:30:39.287 [2024-04-17 06:56:43.783988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.784133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.784161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.287 qpair failed and we were unable to recover it. 00:30:39.287 [2024-04-17 06:56:43.784323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.784474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.784499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.287 qpair failed and we were unable to recover it. 
00:30:39.287 [2024-04-17 06:56:43.784653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.784850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.784878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.287 qpair failed and we were unable to recover it. 00:30:39.287 [2024-04-17 06:56:43.785033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.785196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.785248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.287 qpair failed and we were unable to recover it. 00:30:39.287 [2024-04-17 06:56:43.785382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.785553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.287 [2024-04-17 06:56:43.785580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.287 qpair failed and we were unable to recover it. 00:30:39.287 [2024-04-17 06:56:43.785784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.785941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.785968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.288 qpair failed and we were unable to recover it. 00:30:39.288 [2024-04-17 06:56:43.786118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.786265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.786291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.288 qpair failed and we were unable to recover it. 00:30:39.288 [2024-04-17 06:56:43.786449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.786628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.786655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.288 qpair failed and we were unable to recover it. 00:30:39.288 [2024-04-17 06:56:43.786806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.786987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.787013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.288 qpair failed and we were unable to recover it. 
00:30:39.288 [2024-04-17 06:56:43.787226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.787365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.787389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.288 qpair failed and we were unable to recover it. 00:30:39.288 [2024-04-17 06:56:43.787590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.787758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.787784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.288 qpair failed and we were unable to recover it. 00:30:39.288 [2024-04-17 06:56:43.787928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.788057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.788084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.288 qpair failed and we were unable to recover it. 00:30:39.288 [2024-04-17 06:56:43.788243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.788374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.788399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.288 qpair failed and we were unable to recover it. 00:30:39.288 [2024-04-17 06:56:43.788547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.788687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.788713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.288 qpair failed and we were unable to recover it. 00:30:39.288 [2024-04-17 06:56:43.788878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.789010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.789036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.288 qpair failed and we were unable to recover it. 00:30:39.288 [2024-04-17 06:56:43.789208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.789359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.789384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.288 qpair failed and we were unable to recover it. 
00:30:39.288 [2024-04-17 06:56:43.789535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.789664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.789691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.288 qpair failed and we were unable to recover it. 00:30:39.288 [2024-04-17 06:56:43.789858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.790019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.790046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.288 qpair failed and we were unable to recover it. 00:30:39.288 [2024-04-17 06:56:43.790192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.790374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.790399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.288 qpair failed and we were unable to recover it. 00:30:39.288 [2024-04-17 06:56:43.790560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.790721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.790748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.288 qpair failed and we were unable to recover it. 00:30:39.288 [2024-04-17 06:56:43.790928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.791076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.791104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.288 qpair failed and we were unable to recover it. 00:30:39.288 [2024-04-17 06:56:43.791316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.791441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.791484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.288 qpair failed and we were unable to recover it. 00:30:39.288 [2024-04-17 06:56:43.791624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.791768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.791795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.288 qpair failed and we were unable to recover it. 
00:30:39.288 [2024-04-17 06:56:43.792030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.792205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.792229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.288 qpair failed and we were unable to recover it. 00:30:39.288 [2024-04-17 06:56:43.792358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.792519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.792546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.288 qpair failed and we were unable to recover it. 00:30:39.288 [2024-04-17 06:56:43.792681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.792852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.792878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.288 qpair failed and we were unable to recover it. 00:30:39.288 [2024-04-17 06:56:43.793029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.793197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.793239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.288 qpair failed and we were unable to recover it. 00:30:39.288 [2024-04-17 06:56:43.793410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.793564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.793607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.288 qpair failed and we were unable to recover it. 00:30:39.288 [2024-04-17 06:56:43.793749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.793914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.793940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.288 qpair failed and we were unable to recover it. 00:30:39.288 [2024-04-17 06:56:43.794082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.794215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.794240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.288 qpair failed and we were unable to recover it. 
00:30:39.288 [2024-04-17 06:56:43.794394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.794595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.794619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.288 qpair failed and we were unable to recover it. 00:30:39.288 [2024-04-17 06:56:43.794804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.794969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.794995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.288 qpair failed and we were unable to recover it. 00:30:39.288 [2024-04-17 06:56:43.795150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.795317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.795343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.288 qpair failed and we were unable to recover it. 00:30:39.288 [2024-04-17 06:56:43.795495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.795633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.288 [2024-04-17 06:56:43.795660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.289 qpair failed and we were unable to recover it. 00:30:39.289 [2024-04-17 06:56:43.795856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.795991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.796017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.289 qpair failed and we were unable to recover it. 00:30:39.289 [2024-04-17 06:56:43.796197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.796329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.796354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.289 qpair failed and we were unable to recover it. 00:30:39.289 [2024-04-17 06:56:43.796483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.796637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.796662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.289 qpair failed and we were unable to recover it. 
00:30:39.289 [2024-04-17 06:56:43.796855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.796982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.797005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.289 qpair failed and we were unable to recover it. 00:30:39.289 [2024-04-17 06:56:43.797134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.797284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.797326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.289 qpair failed and we were unable to recover it. 00:30:39.289 [2024-04-17 06:56:43.797502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.797639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.797663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.289 qpair failed and we were unable to recover it. 00:30:39.289 [2024-04-17 06:56:43.797848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.797986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.798012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.289 qpair failed and we were unable to recover it. 00:30:39.289 [2024-04-17 06:56:43.798171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.798306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.798331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.289 qpair failed and we were unable to recover it. 00:30:39.289 [2024-04-17 06:56:43.798489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.798648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.798675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.289 qpair failed and we were unable to recover it. 00:30:39.289 [2024-04-17 06:56:43.798820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.798959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.798985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.289 qpair failed and we were unable to recover it. 
00:30:39.289 [2024-04-17 06:56:43.799131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.799261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.799286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.289 qpair failed and we were unable to recover it. 00:30:39.289 [2024-04-17 06:56:43.799419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.799601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.799627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.289 qpair failed and we were unable to recover it. 00:30:39.289 [2024-04-17 06:56:43.799802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.799997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.800030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.289 qpair failed and we were unable to recover it. 00:30:39.289 [2024-04-17 06:56:43.800210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.800365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.800390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.289 qpair failed and we were unable to recover it. 00:30:39.289 [2024-04-17 06:56:43.800571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.800737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.800764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.289 qpair failed and we were unable to recover it. 00:30:39.289 [2024-04-17 06:56:43.800899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.801046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.801073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.289 qpair failed and we were unable to recover it. 00:30:39.289 [2024-04-17 06:56:43.801247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.801449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.801475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.289 qpair failed and we were unable to recover it. 
00:30:39.289 [2024-04-17 06:56:43.801613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.801806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.801831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.289 qpair failed and we were unable to recover it. 00:30:39.289 [2024-04-17 06:56:43.802007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.802152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.802185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.289 qpair failed and we were unable to recover it. 00:30:39.289 [2024-04-17 06:56:43.802389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.802619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.802666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.289 qpair failed and we were unable to recover it. 00:30:39.289 [2024-04-17 06:56:43.802851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.802988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.803017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.289 qpair failed and we were unable to recover it. 00:30:39.289 [2024-04-17 06:56:43.803195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.803339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.803363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.289 qpair failed and we were unable to recover it. 00:30:39.289 [2024-04-17 06:56:43.803542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.803748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.803792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.289 qpair failed and we were unable to recover it. 00:30:39.289 [2024-04-17 06:56:43.803969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.804103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.804131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.289 qpair failed and we were unable to recover it. 
00:30:39.289 [2024-04-17 06:56:43.804322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.804456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.804480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.289 qpair failed and we were unable to recover it. 00:30:39.289 [2024-04-17 06:56:43.804663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.804848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.804873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.289 qpair failed and we were unable to recover it. 00:30:39.289 [2024-04-17 06:56:43.805000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.805146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.805174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.289 qpair failed and we were unable to recover it. 00:30:39.289 [2024-04-17 06:56:43.805328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.805482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.805522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.289 qpair failed and we were unable to recover it. 00:30:39.289 [2024-04-17 06:56:43.805699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.289 [2024-04-17 06:56:43.805871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.805899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.290 qpair failed and we were unable to recover it. 00:30:39.290 [2024-04-17 06:56:43.806050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.806186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.806213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.290 qpair failed and we were unable to recover it. 00:30:39.290 [2024-04-17 06:56:43.806384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.806594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.806639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.290 qpair failed and we were unable to recover it. 
00:30:39.290 [2024-04-17 06:56:43.806791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.806969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.807010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.290 qpair failed and we were unable to recover it. 00:30:39.290 [2024-04-17 06:56:43.807150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.807304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.807339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.290 qpair failed and we were unable to recover it. 00:30:39.290 [2024-04-17 06:56:43.807544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.807686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.807712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.290 qpair failed and we were unable to recover it. 00:30:39.290 [2024-04-17 06:56:43.807895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.808028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.808069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.290 qpair failed and we were unable to recover it. 00:30:39.290 [2024-04-17 06:56:43.808255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.808380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.808404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.290 qpair failed and we were unable to recover it. 00:30:39.290 [2024-04-17 06:56:43.808548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.808694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.808721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.290 qpair failed and we were unable to recover it. 00:30:39.290 [2024-04-17 06:56:43.808868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.808993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.809017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.290 qpair failed and we were unable to recover it. 
00:30:39.290 [2024-04-17 06:56:43.809197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.809394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.809418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.290 qpair failed and we were unable to recover it. 00:30:39.290 [2024-04-17 06:56:43.809598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.809776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.809804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.290 qpair failed and we were unable to recover it. 00:30:39.290 [2024-04-17 06:56:43.810000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.810149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.810173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.290 qpair failed and we were unable to recover it. 00:30:39.290 [2024-04-17 06:56:43.810298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.810423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.810447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.290 qpair failed and we were unable to recover it. 00:30:39.290 [2024-04-17 06:56:43.810624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.810854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.810899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.290 qpair failed and we were unable to recover it. 00:30:39.290 [2024-04-17 06:56:43.811055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.811213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.811238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.290 qpair failed and we were unable to recover it. 00:30:39.290 [2024-04-17 06:56:43.811364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.811547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.811574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.290 qpair failed and we were unable to recover it. 
00:30:39.290 [2024-04-17 06:56:43.811734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.811889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.811913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.290 qpair failed and we were unable to recover it. 00:30:39.290 [2024-04-17 06:56:43.812079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.812211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.812236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.290 qpair failed and we were unable to recover it. 00:30:39.290 [2024-04-17 06:56:43.812391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.812548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.812587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.290 qpair failed and we were unable to recover it. 00:30:39.290 [2024-04-17 06:56:43.812754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.812965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.812992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.290 qpair failed and we were unable to recover it. 00:30:39.290 [2024-04-17 06:56:43.813182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.813335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.813359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.290 qpair failed and we were unable to recover it. 00:30:39.290 [2024-04-17 06:56:43.813510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.813695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.813719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.290 qpair failed and we were unable to recover it. 00:30:39.290 [2024-04-17 06:56:43.813867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.814017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.814057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.290 qpair failed and we were unable to recover it. 
00:30:39.290 [2024-04-17 06:56:43.814243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.814405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.814429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.290 qpair failed and we were unable to recover it. 00:30:39.290 [2024-04-17 06:56:43.814555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.814685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.814709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.290 qpair failed and we were unable to recover it. 00:30:39.290 [2024-04-17 06:56:43.814901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.815039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.815063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.290 qpair failed and we were unable to recover it. 00:30:39.290 [2024-04-17 06:56:43.815231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.815361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.815385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.290 qpair failed and we were unable to recover it. 00:30:39.290 [2024-04-17 06:56:43.815556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.815708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.815733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.290 qpair failed and we were unable to recover it. 00:30:39.290 [2024-04-17 06:56:43.815915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.290 [2024-04-17 06:56:43.816074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.816101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.291 qpair failed and we were unable to recover it. 00:30:39.291 [2024-04-17 06:56:43.816283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.816440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.816482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.291 qpair failed and we were unable to recover it. 
00:30:39.291 [2024-04-17 06:56:43.816620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.816745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.816772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.291 qpair failed and we were unable to recover it. 00:30:39.291 [2024-04-17 06:56:43.816902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.817098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.817125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.291 qpair failed and we were unable to recover it. 00:30:39.291 [2024-04-17 06:56:43.817273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.817424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.817448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.291 qpair failed and we were unable to recover it. 00:30:39.291 [2024-04-17 06:56:43.817582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.817727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.817754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.291 qpair failed and we were unable to recover it. 00:30:39.291 [2024-04-17 06:56:43.817947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.818098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.818122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.291 qpair failed and we were unable to recover it. 00:30:39.291 [2024-04-17 06:56:43.818298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.818449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.818473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.291 qpair failed and we were unable to recover it. 00:30:39.291 [2024-04-17 06:56:43.818644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.818805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.818848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.291 qpair failed and we were unable to recover it. 
00:30:39.291 [2024-04-17 06:56:43.819015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.819185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.819213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.291 qpair failed and we were unable to recover it. 00:30:39.291 [2024-04-17 06:56:43.819373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.819528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.819552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.291 qpair failed and we were unable to recover it. 00:30:39.291 [2024-04-17 06:56:43.819700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.819855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.819879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.291 qpair failed and we were unable to recover it. 00:30:39.291 [2024-04-17 06:56:43.820059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.820242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.820270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.291 qpair failed and we were unable to recover it. 00:30:39.291 [2024-04-17 06:56:43.820443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.820625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.820667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.291 qpair failed and we were unable to recover it. 00:30:39.291 [2024-04-17 06:56:43.820807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.820941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.820968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.291 qpair failed and we were unable to recover it. 00:30:39.291 [2024-04-17 06:56:43.821099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.821239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.821268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.291 qpair failed and we were unable to recover it. 
00:30:39.291 [2024-04-17 06:56:43.821417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.821603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.821643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.291 qpair failed and we were unable to recover it. 00:30:39.291 [2024-04-17 06:56:43.821779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.821950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.821977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.291 qpair failed and we were unable to recover it. 00:30:39.291 [2024-04-17 06:56:43.822139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.822280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.822308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.291 qpair failed and we were unable to recover it. 00:30:39.291 [2024-04-17 06:56:43.822488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.822665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.822691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.291 qpair failed and we were unable to recover it. 00:30:39.291 [2024-04-17 06:56:43.822825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.822983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.823011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.291 qpair failed and we were unable to recover it. 00:30:39.291 [2024-04-17 06:56:43.823183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.823326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.823353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.291 qpair failed and we were unable to recover it. 00:30:39.291 [2024-04-17 06:56:43.823499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.823684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.823727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.291 qpair failed and we were unable to recover it. 
00:30:39.291 [2024-04-17 06:56:43.823904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.824040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.824067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.291 qpair failed and we were unable to recover it. 00:30:39.291 [2024-04-17 06:56:43.824244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.291 [2024-04-17 06:56:43.824421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.824448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.292 qpair failed and we were unable to recover it. 00:30:39.292 [2024-04-17 06:56:43.824597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.824751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.824775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.292 qpair failed and we were unable to recover it. 00:30:39.292 [2024-04-17 06:56:43.824965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.825108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.825141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.292 qpair failed and we were unable to recover it. 00:30:39.292 [2024-04-17 06:56:43.825306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.825432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.825457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.292 qpair failed and we were unable to recover it. 00:30:39.292 [2024-04-17 06:56:43.825620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.825747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.825771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.292 qpair failed and we were unable to recover it. 00:30:39.292 [2024-04-17 06:56:43.825913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.826110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.826137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.292 qpair failed and we were unable to recover it. 
00:30:39.292 [2024-04-17 06:56:43.826292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.826445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.826471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.292 qpair failed and we were unable to recover it. 00:30:39.292 [2024-04-17 06:56:43.826595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.826754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.826778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.292 qpair failed and we were unable to recover it. 00:30:39.292 [2024-04-17 06:56:43.826912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.827056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.827083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.292 qpair failed and we were unable to recover it. 00:30:39.292 [2024-04-17 06:56:43.827227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.827372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.827398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.292 qpair failed and we were unable to recover it. 00:30:39.292 [2024-04-17 06:56:43.827576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.827694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.827734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.292 qpair failed and we were unable to recover it. 00:30:39.292 [2024-04-17 06:56:43.827883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.828021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.828048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.292 qpair failed and we were unable to recover it. 00:30:39.292 [2024-04-17 06:56:43.828227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.828393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.828425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.292 qpair failed and we were unable to recover it. 
00:30:39.292 [2024-04-17 06:56:43.828580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.828733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.828758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.292 qpair failed and we were unable to recover it. 00:30:39.292 [2024-04-17 06:56:43.828931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.829092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.829119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.292 qpair failed and we were unable to recover it. 00:30:39.292 [2024-04-17 06:56:43.829259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.829394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.829420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.292 qpair failed and we were unable to recover it. 00:30:39.292 [2024-04-17 06:56:43.829572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.829718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.829743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.292 qpair failed and we were unable to recover it. 00:30:39.292 [2024-04-17 06:56:43.829908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.830094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.830118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.292 qpair failed and we were unable to recover it. 00:30:39.292 [2024-04-17 06:56:43.830258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.830411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.830435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.292 qpair failed and we were unable to recover it. 00:30:39.292 [2024-04-17 06:56:43.830590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.830714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.830738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.292 qpair failed and we were unable to recover it. 
00:30:39.292 [2024-04-17 06:56:43.830922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.831060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.831086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.292 qpair failed and we were unable to recover it. 00:30:39.292 [2024-04-17 06:56:43.831247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.831412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.831439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.292 qpair failed and we were unable to recover it. 00:30:39.292 [2024-04-17 06:56:43.831596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.831762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.831786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.292 qpair failed and we were unable to recover it. 00:30:39.292 [2024-04-17 06:56:43.831943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.832100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.832142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.292 qpair failed and we were unable to recover it. 00:30:39.292 [2024-04-17 06:56:43.832316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.832438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.832462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.292 qpair failed and we were unable to recover it. 00:30:39.292 [2024-04-17 06:56:43.832594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.832736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.832761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.292 qpair failed and we were unable to recover it. 00:30:39.292 [2024-04-17 06:56:43.832946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.833072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.833096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.292 qpair failed and we were unable to recover it. 
00:30:39.292 [2024-04-17 06:56:43.833223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.833344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.833368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.292 qpair failed and we were unable to recover it. 00:30:39.292 [2024-04-17 06:56:43.833497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.833655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.833679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.292 qpair failed and we were unable to recover it. 00:30:39.292 [2024-04-17 06:56:43.833850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.834030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.292 [2024-04-17 06:56:43.834054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.292 qpair failed and we were unable to recover it. 00:30:39.292 [2024-04-17 06:56:43.834223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.834359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.834385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.293 qpair failed and we were unable to recover it. 00:30:39.293 [2024-04-17 06:56:43.834535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.834697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.834722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.293 qpair failed and we were unable to recover it. 00:30:39.293 [2024-04-17 06:56:43.834850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.835026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.835054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.293 qpair failed and we were unable to recover it. 00:30:39.293 [2024-04-17 06:56:43.835209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.835333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.835357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.293 qpair failed and we were unable to recover it. 
00:30:39.293 [2024-04-17 06:56:43.835491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.835616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.835640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.293 qpair failed and we were unable to recover it. 00:30:39.293 [2024-04-17 06:56:43.835793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.835937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.835964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.293 qpair failed and we were unable to recover it. 00:30:39.293 [2024-04-17 06:56:43.836144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.836289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.836316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.293 qpair failed and we were unable to recover it. 00:30:39.293 [2024-04-17 06:56:43.836468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.836599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.836624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.293 qpair failed and we were unable to recover it. 00:30:39.293 [2024-04-17 06:56:43.836805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.836942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.836969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.293 qpair failed and we were unable to recover it. 00:30:39.293 [2024-04-17 06:56:43.837136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.837349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.837374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.293 qpair failed and we were unable to recover it. 00:30:39.293 [2024-04-17 06:56:43.837508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.837663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.837687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.293 qpair failed and we were unable to recover it. 
00:30:39.293 [2024-04-17 06:56:43.837846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.838023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.838047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.293 qpair failed and we were unable to recover it. 00:30:39.293 [2024-04-17 06:56:43.838187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.838363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.838401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.293 qpair failed and we were unable to recover it. 00:30:39.293 [2024-04-17 06:56:43.838599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.838729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.838755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.293 qpair failed and we were unable to recover it. 00:30:39.293 [2024-04-17 06:56:43.838891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.839073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.839103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.293 qpair failed and we were unable to recover it. 00:30:39.293 [2024-04-17 06:56:43.839248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.839399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.839428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.293 qpair failed and we were unable to recover it. 00:30:39.293 [2024-04-17 06:56:43.839583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.839714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.839738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.293 qpair failed and we were unable to recover it. 00:30:39.293 [2024-04-17 06:56:43.839895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.840032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.840060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.293 qpair failed and we were unable to recover it. 
00:30:39.293 [2024-04-17 06:56:43.840246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.840402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.840429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.293 qpair failed and we were unable to recover it. 00:30:39.293 [2024-04-17 06:56:43.840564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.840714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.840741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.293 qpair failed and we were unable to recover it. 00:30:39.293 [2024-04-17 06:56:43.840883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.841023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.841049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.293 qpair failed and we were unable to recover it. 00:30:39.293 [2024-04-17 06:56:43.841212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.841376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.841405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.293 qpair failed and we were unable to recover it. 00:30:39.293 [2024-04-17 06:56:43.841581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.841729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.841768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.293 qpair failed and we were unable to recover it. 00:30:39.293 [2024-04-17 06:56:43.841910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.842051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.842079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.293 qpair failed and we were unable to recover it. 00:30:39.293 [2024-04-17 06:56:43.842260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.842400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.842441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.293 qpair failed and we were unable to recover it. 
00:30:39.293 [2024-04-17 06:56:43.842623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.842781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.293 [2024-04-17 06:56:43.842806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.293 qpair failed and we were unable to recover it. 00:30:39.566 [2024-04-17 06:56:43.842962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.566 [2024-04-17 06:56:43.843094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.843119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.567 qpair failed and we were unable to recover it. 00:30:39.567 [2024-04-17 06:56:43.843287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.843434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.843462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.567 qpair failed and we were unable to recover it. 00:30:39.567 [2024-04-17 06:56:43.843658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.843810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.843835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.567 qpair failed and we were unable to recover it. 00:30:39.567 [2024-04-17 06:56:43.843956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.844100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.844136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.567 qpair failed and we were unable to recover it. 00:30:39.567 [2024-04-17 06:56:43.844329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.844472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.844499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.567 qpair failed and we were unable to recover it. 00:30:39.567 [2024-04-17 06:56:43.844627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.844754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.844778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.567 qpair failed and we were unable to recover it. 
00:30:39.567 [2024-04-17 06:56:43.844901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.845018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.845043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.567 qpair failed and we were unable to recover it. 00:30:39.567 [2024-04-17 06:56:43.845172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.845321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.845351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.567 qpair failed and we were unable to recover it. 00:30:39.567 [2024-04-17 06:56:43.845487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.845610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.845634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.567 qpair failed and we were unable to recover it. 00:30:39.567 [2024-04-17 06:56:43.845756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.845899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.845923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.567 qpair failed and we were unable to recover it. 00:30:39.567 [2024-04-17 06:56:43.846048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.846165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.846197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.567 qpair failed and we were unable to recover it. 00:30:39.567 [2024-04-17 06:56:43.846320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.846474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.846499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.567 qpair failed and we were unable to recover it. 00:30:39.567 [2024-04-17 06:56:43.846652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.846776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.846801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.567 qpair failed and we were unable to recover it. 
00:30:39.567 [2024-04-17 06:56:43.846979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.847103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.847127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.567 qpair failed and we were unable to recover it. 00:30:39.567 [2024-04-17 06:56:43.847259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.847413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.847438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.567 qpair failed and we were unable to recover it. 00:30:39.567 [2024-04-17 06:56:43.847630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.847773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.847798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.567 qpair failed and we were unable to recover it. 00:30:39.567 [2024-04-17 06:56:43.847921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.848078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.848103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.567 qpair failed and we were unable to recover it. 00:30:39.567 [2024-04-17 06:56:43.848252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.848410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.848438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.567 qpair failed and we were unable to recover it. 00:30:39.567 [2024-04-17 06:56:43.848581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.848731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.848756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.567 qpair failed and we were unable to recover it. 00:30:39.567 [2024-04-17 06:56:43.848923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.849078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.849103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.567 qpair failed and we were unable to recover it. 
00:30:39.567 [2024-04-17 06:56:43.849241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.849364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.849389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.567 qpair failed and we were unable to recover it. 00:30:39.567 [2024-04-17 06:56:43.849526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.849713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.849737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.567 qpair failed and we were unable to recover it. 00:30:39.567 [2024-04-17 06:56:43.849866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.850021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.850047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.567 qpair failed and we were unable to recover it. 00:30:39.567 [2024-04-17 06:56:43.850227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.850353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.850379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.567 qpair failed and we were unable to recover it. 00:30:39.567 [2024-04-17 06:56:43.850514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.850636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.850660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.567 qpair failed and we were unable to recover it. 00:30:39.567 [2024-04-17 06:56:43.850808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.850986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.851011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.567 qpair failed and we were unable to recover it. 00:30:39.567 [2024-04-17 06:56:43.851137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.851302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.851328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.567 qpair failed and we were unable to recover it. 
00:30:39.567 [2024-04-17 06:56:43.851483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.567 [2024-04-17 06:56:43.851609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.851637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.568 qpair failed and we were unable to recover it. 00:30:39.568 [2024-04-17 06:56:43.851767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.851921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.851945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.568 qpair failed and we were unable to recover it. 00:30:39.568 [2024-04-17 06:56:43.852098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.852225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.852249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.568 qpair failed and we were unable to recover it. 00:30:39.568 [2024-04-17 06:56:43.852381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.852538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.852563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.568 qpair failed and we were unable to recover it. 00:30:39.568 [2024-04-17 06:56:43.852687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.852815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.852839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.568 qpair failed and we were unable to recover it. 00:30:39.568 [2024-04-17 06:56:43.852966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.853094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.853118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.568 qpair failed and we were unable to recover it. 00:30:39.568 [2024-04-17 06:56:43.853252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.853410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.853434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.568 qpair failed and we were unable to recover it. 
00:30:39.568 [2024-04-17 06:56:43.853567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.853693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.853718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.568 qpair failed and we were unable to recover it. 00:30:39.568 [2024-04-17 06:56:43.853877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.854007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.854031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.568 qpair failed and we were unable to recover it. 00:30:39.568 [2024-04-17 06:56:43.854184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.854324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.854348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.568 qpair failed and we were unable to recover it. 00:30:39.568 [2024-04-17 06:56:43.854488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.854633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.854657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.568 qpair failed and we were unable to recover it. 00:30:39.568 [2024-04-17 06:56:43.854820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.854976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.855000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.568 qpair failed and we were unable to recover it. 00:30:39.568 [2024-04-17 06:56:43.855179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.855311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.855334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.568 qpair failed and we were unable to recover it. 00:30:39.568 [2024-04-17 06:56:43.855473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.855602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.855628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.568 qpair failed and we were unable to recover it. 
00:30:39.568 [2024-04-17 06:56:43.855781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.855906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.855931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.568 qpair failed and we were unable to recover it. 00:30:39.568 [2024-04-17 06:56:43.856097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.856242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.856268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.568 qpair failed and we were unable to recover it. 00:30:39.568 [2024-04-17 06:56:43.856394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.856557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.856581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.568 qpair failed and we were unable to recover it. 00:30:39.568 [2024-04-17 06:56:43.856705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.856824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.856847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.568 qpair failed and we were unable to recover it. 00:30:39.568 [2024-04-17 06:56:43.856993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.857125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.857149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.568 qpair failed and we were unable to recover it. 00:30:39.568 [2024-04-17 06:56:43.857296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.857451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.857475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.568 qpair failed and we were unable to recover it. 00:30:39.568 [2024-04-17 06:56:43.857593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.857745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.857771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.568 qpair failed and we were unable to recover it. 
00:30:39.568 [2024-04-17 06:56:43.857918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.858048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.858072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.568 qpair failed and we were unable to recover it. 00:30:39.568 [2024-04-17 06:56:43.858228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.858356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.858380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.568 qpair failed and we were unable to recover it. 00:30:39.568 [2024-04-17 06:56:43.858529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.858653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.858676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.568 qpair failed and we were unable to recover it. 00:30:39.568 [2024-04-17 06:56:43.858823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.858990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.859017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.568 qpair failed and we were unable to recover it. 00:30:39.568 [2024-04-17 06:56:43.859188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.859380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.859406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.568 qpair failed and we were unable to recover it. 00:30:39.568 [2024-04-17 06:56:43.859560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.859681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.859705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.568 qpair failed and we were unable to recover it. 00:30:39.568 [2024-04-17 06:56:43.859865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.860032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.860058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.568 qpair failed and we were unable to recover it. 
00:30:39.568 [2024-04-17 06:56:43.860207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.860357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.568 [2024-04-17 06:56:43.860384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.569 qpair failed and we were unable to recover it. 00:30:39.569 [2024-04-17 06:56:43.860551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.860709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.860751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.569 qpair failed and we were unable to recover it. 00:30:39.569 [2024-04-17 06:56:43.860894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.861035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.861061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.569 qpair failed and we were unable to recover it. 00:30:39.569 [2024-04-17 06:56:43.861217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.861361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.861387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.569 qpair failed and we were unable to recover it. 00:30:39.569 [2024-04-17 06:56:43.861600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.861738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.861767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.569 qpair failed and we were unable to recover it. 00:30:39.569 [2024-04-17 06:56:43.861908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.862055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.862081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.569 qpair failed and we were unable to recover it. 00:30:39.569 [2024-04-17 06:56:43.862220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.862353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.862379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.569 qpair failed and we were unable to recover it. 
00:30:39.569 [2024-04-17 06:56:43.862560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.862711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.862753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.569 qpair failed and we were unable to recover it. 00:30:39.569 [2024-04-17 06:56:43.862893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.863064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.863092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.569 qpair failed and we were unable to recover it. 00:30:39.569 [2024-04-17 06:56:43.863247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.863442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.863468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.569 qpair failed and we were unable to recover it. 00:30:39.569 [2024-04-17 06:56:43.863651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.863816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.863844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.569 qpair failed and we were unable to recover it. 00:30:39.569 [2024-04-17 06:56:43.864005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.864202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.864230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.569 qpair failed and we were unable to recover it. 00:30:39.569 [2024-04-17 06:56:43.864376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.864548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.864575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.569 qpair failed and we were unable to recover it. 00:30:39.569 [2024-04-17 06:56:43.864728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.864887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.864928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.569 qpair failed and we were unable to recover it. 
00:30:39.569 [2024-04-17 06:56:43.865070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.865227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.865255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.569 qpair failed and we were unable to recover it. 00:30:39.569 [2024-04-17 06:56:43.865417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.865559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.865585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.569 qpair failed and we were unable to recover it. 00:30:39.569 [2024-04-17 06:56:43.865768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.865936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.865961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.569 qpair failed and we were unable to recover it. 00:30:39.569 [2024-04-17 06:56:43.866126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.866283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.866308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.569 qpair failed and we were unable to recover it. 00:30:39.569 [2024-04-17 06:56:43.866439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.866594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.866620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.569 qpair failed and we were unable to recover it. 00:30:39.569 [2024-04-17 06:56:43.866805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.866990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.867015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.569 qpair failed and we were unable to recover it. 00:30:39.569 [2024-04-17 06:56:43.867206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.867339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.867366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.569 qpair failed and we were unable to recover it. 
00:30:39.569 [2024-04-17 06:56:43.867541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.867682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.867711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.569 qpair failed and we were unable to recover it. 00:30:39.569 [2024-04-17 06:56:43.867916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.868064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.868093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.569 qpair failed and we were unable to recover it. 00:30:39.569 [2024-04-17 06:56:43.868273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.868416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.868442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.569 qpair failed and we were unable to recover it. 00:30:39.569 [2024-04-17 06:56:43.868619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.868753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.868778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.569 qpair failed and we were unable to recover it. 00:30:39.569 [2024-04-17 06:56:43.868958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.869128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.869155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.569 qpair failed and we were unable to recover it. 00:30:39.569 [2024-04-17 06:56:43.869310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.869448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.869475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.569 qpair failed and we were unable to recover it. 00:30:39.569 [2024-04-17 06:56:43.869618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.869780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.869807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.569 qpair failed and we were unable to recover it. 
00:30:39.569 [2024-04-17 06:56:43.869951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.870126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.569 [2024-04-17 06:56:43.870150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.569 qpair failed and we were unable to recover it. 00:30:39.569 [2024-04-17 06:56:43.870324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.870463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.870491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.570 qpair failed and we were unable to recover it. 00:30:39.570 [2024-04-17 06:56:43.870667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.870819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.870843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.570 qpair failed and we were unable to recover it. 00:30:39.570 [2024-04-17 06:56:43.870979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.871172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.871205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.570 qpair failed and we were unable to recover it. 00:30:39.570 [2024-04-17 06:56:43.871371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.871523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.871547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.570 qpair failed and we were unable to recover it. 00:30:39.570 [2024-04-17 06:56:43.871707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.871883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.871910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.570 qpair failed and we were unable to recover it. 00:30:39.570 [2024-04-17 06:56:43.872056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.872210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.872236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.570 qpair failed and we were unable to recover it. 
00:30:39.570 [2024-04-17 06:56:43.872397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.872568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.872595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.570 qpair failed and we were unable to recover it. 00:30:39.570 [2024-04-17 06:56:43.872762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.872907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.872935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.570 qpair failed and we were unable to recover it. 00:30:39.570 [2024-04-17 06:56:43.873088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.873216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.873258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.570 qpair failed and we were unable to recover it. 00:30:39.570 [2024-04-17 06:56:43.873435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.873606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.873635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.570 qpair failed and we were unable to recover it. 00:30:39.570 [2024-04-17 06:56:43.873802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.873980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.874004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.570 qpair failed and we were unable to recover it. 00:30:39.570 [2024-04-17 06:56:43.874135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.874272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.874298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.570 qpair failed and we were unable to recover it. 00:30:39.570 [2024-04-17 06:56:43.874486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.874676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.874719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.570 qpair failed and we were unable to recover it. 
00:30:39.570 [2024-04-17 06:56:43.874885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.875027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.875053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.570 qpair failed and we were unable to recover it. 00:30:39.570 [2024-04-17 06:56:43.875208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.875362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.875402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.570 qpair failed and we were unable to recover it. 00:30:39.570 [2024-04-17 06:56:43.875585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.875752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.875778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.570 qpair failed and we were unable to recover it. 00:30:39.570 [2024-04-17 06:56:43.875916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.876052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.876078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.570 qpair failed and we were unable to recover it. 00:30:39.570 [2024-04-17 06:56:43.876232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.876369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.876394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.570 qpair failed and we were unable to recover it. 00:30:39.570 [2024-04-17 06:56:43.876558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.876691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.876716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.570 qpair failed and we were unable to recover it. 00:30:39.570 [2024-04-17 06:56:43.876877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.877076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.570 [2024-04-17 06:56:43.877103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.570 qpair failed and we were unable to recover it. 
00:30:39.576 [2024-04-17 06:56:43.928058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.928217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.928241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.576 qpair failed and we were unable to recover it. 00:30:39.576 [2024-04-17 06:56:43.928418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.928549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.928576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.576 qpair failed and we were unable to recover it. 00:30:39.576 [2024-04-17 06:56:43.928743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.928911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.928938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.576 qpair failed and we were unable to recover it. 00:30:39.576 [2024-04-17 06:56:43.929115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.929265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.929308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.576 qpair failed and we were unable to recover it. 00:30:39.576 [2024-04-17 06:56:43.929457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.929633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.929657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.576 qpair failed and we were unable to recover it. 00:30:39.576 [2024-04-17 06:56:43.929842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.929999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.930025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.576 qpair failed and we were unable to recover it. 00:30:39.576 [2024-04-17 06:56:43.930172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.930304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.930329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.576 qpair failed and we were unable to recover it. 
00:30:39.576 [2024-04-17 06:56:43.930502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.930674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.930702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.576 qpair failed and we were unable to recover it. 00:30:39.576 [2024-04-17 06:56:43.930846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.931044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.931072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.576 qpair failed and we were unable to recover it. 00:30:39.576 [2024-04-17 06:56:43.931224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.931381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.931404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.576 qpair failed and we were unable to recover it. 00:30:39.576 [2024-04-17 06:56:43.931532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.931691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.931716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.576 qpair failed and we were unable to recover it. 00:30:39.576 [2024-04-17 06:56:43.931860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.932025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.932050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.576 qpair failed and we were unable to recover it. 00:30:39.576 [2024-04-17 06:56:43.932259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.932415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.932441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.576 qpair failed and we were unable to recover it. 00:30:39.576 [2024-04-17 06:56:43.932612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.932771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.932797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.576 qpair failed and we were unable to recover it. 
00:30:39.576 [2024-04-17 06:56:43.932956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.933149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.933183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.576 qpair failed and we were unable to recover it. 00:30:39.576 [2024-04-17 06:56:43.933373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.933536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.933562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.576 qpair failed and we were unable to recover it. 00:30:39.576 [2024-04-17 06:56:43.933721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.933869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.933897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.576 qpair failed and we were unable to recover it. 00:30:39.576 [2024-04-17 06:56:43.934034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.934217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.934245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.576 qpair failed and we were unable to recover it. 00:30:39.576 [2024-04-17 06:56:43.934413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.934603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.934630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.576 qpair failed and we were unable to recover it. 00:30:39.576 [2024-04-17 06:56:43.934796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.934962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.934989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.576 qpair failed and we were unable to recover it. 00:30:39.576 [2024-04-17 06:56:43.935126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.935307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.935332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.576 qpair failed and we were unable to recover it. 
00:30:39.576 [2024-04-17 06:56:43.935511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.935683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.935711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.576 qpair failed and we were unable to recover it. 00:30:39.576 [2024-04-17 06:56:43.935913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.936105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.936128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.576 qpair failed and we were unable to recover it. 00:30:39.576 [2024-04-17 06:56:43.936256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.936437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.576 [2024-04-17 06:56:43.936478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.576 qpair failed and we were unable to recover it. 00:30:39.577 [2024-04-17 06:56:43.936631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.936786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.936810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.577 qpair failed and we were unable to recover it. 00:30:39.577 [2024-04-17 06:56:43.936982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.937140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.937166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.577 qpair failed and we were unable to recover it. 00:30:39.577 [2024-04-17 06:56:43.937350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.937497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.937524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.577 qpair failed and we were unable to recover it. 00:30:39.577 [2024-04-17 06:56:43.937683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.937814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.937838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.577 qpair failed and we were unable to recover it. 
00:30:39.577 [2024-04-17 06:56:43.937998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.938155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.938189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.577 qpair failed and we were unable to recover it. 00:30:39.577 [2024-04-17 06:56:43.938339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.938508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.938535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.577 qpair failed and we were unable to recover it. 00:30:39.577 [2024-04-17 06:56:43.938704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.938861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.938884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.577 qpair failed and we were unable to recover it. 00:30:39.577 [2024-04-17 06:56:43.939060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.939256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.939281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.577 qpair failed and we were unable to recover it. 00:30:39.577 [2024-04-17 06:56:43.939415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.939554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.939579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.577 qpair failed and we were unable to recover it. 00:30:39.577 [2024-04-17 06:56:43.939714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.939836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.939860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.577 qpair failed and we were unable to recover it. 00:30:39.577 [2024-04-17 06:56:43.940014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.940137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.940161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.577 qpair failed and we were unable to recover it. 
00:30:39.577 [2024-04-17 06:56:43.940334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.940468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.940493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.577 qpair failed and we were unable to recover it. 00:30:39.577 [2024-04-17 06:56:43.940647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.940777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.940801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.577 qpair failed and we were unable to recover it. 00:30:39.577 [2024-04-17 06:56:43.940932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.941088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.941112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.577 qpair failed and we were unable to recover it. 00:30:39.577 [2024-04-17 06:56:43.941253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.941414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.941437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.577 qpair failed and we were unable to recover it. 00:30:39.577 [2024-04-17 06:56:43.941641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.941783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.941809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.577 qpair failed and we were unable to recover it. 00:30:39.577 [2024-04-17 06:56:43.941991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.942122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.942147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.577 qpair failed and we were unable to recover it. 00:30:39.577 [2024-04-17 06:56:43.942353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.942489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.942514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.577 qpair failed and we were unable to recover it. 
00:30:39.577 [2024-04-17 06:56:43.942698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.942829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.942854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.577 qpair failed and we were unable to recover it. 00:30:39.577 [2024-04-17 06:56:43.943010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.943227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.943251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.577 qpair failed and we were unable to recover it. 00:30:39.577 [2024-04-17 06:56:43.943436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.943624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.943647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.577 qpair failed and we were unable to recover it. 00:30:39.577 [2024-04-17 06:56:43.943802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.943923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.943962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.577 qpair failed and we were unable to recover it. 00:30:39.577 [2024-04-17 06:56:43.944135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.944308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.944334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.577 qpair failed and we were unable to recover it. 00:30:39.577 [2024-04-17 06:56:43.944513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.944675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.944702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.577 qpair failed and we were unable to recover it. 00:30:39.577 [2024-04-17 06:56:43.944881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.945044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.945083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.577 qpair failed and we were unable to recover it. 
00:30:39.577 [2024-04-17 06:56:43.945289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.945500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.945524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.577 qpair failed and we were unable to recover it. 00:30:39.577 [2024-04-17 06:56:43.945696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.945843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.945870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.577 qpair failed and we were unable to recover it. 00:30:39.577 [2024-04-17 06:56:43.946022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.946180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.946207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.577 qpair failed and we were unable to recover it. 00:30:39.577 [2024-04-17 06:56:43.946366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.577 [2024-04-17 06:56:43.946573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.946597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.578 qpair failed and we were unable to recover it. 00:30:39.578 [2024-04-17 06:56:43.946752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.946945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.946971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.578 qpair failed and we were unable to recover it. 00:30:39.578 [2024-04-17 06:56:43.947119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.947240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.947265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.578 qpair failed and we were unable to recover it. 00:30:39.578 [2024-04-17 06:56:43.947414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.947539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.947563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.578 qpair failed and we were unable to recover it. 
00:30:39.578 [2024-04-17 06:56:43.947713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.947893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.947921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.578 qpair failed and we were unable to recover it. 00:30:39.578 [2024-04-17 06:56:43.948110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.948243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.948273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.578 qpair failed and we were unable to recover it. 00:30:39.578 [2024-04-17 06:56:43.948430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.948554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.948582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.578 qpair failed and we were unable to recover it. 00:30:39.578 [2024-04-17 06:56:43.948708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.948885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.948914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.578 qpair failed and we were unable to recover it. 00:30:39.578 [2024-04-17 06:56:43.949089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.949225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.949252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.578 qpair failed and we were unable to recover it. 00:30:39.578 [2024-04-17 06:56:43.949405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.949596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.949623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.578 qpair failed and we were unable to recover it. 00:30:39.578 [2024-04-17 06:56:43.949787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.949969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.949994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.578 qpair failed and we were unable to recover it. 
00:30:39.578 [2024-04-17 06:56:43.950123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.950283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.950309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.578 qpair failed and we were unable to recover it. 00:30:39.578 [2024-04-17 06:56:43.950432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.950584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.950612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.578 qpair failed and we were unable to recover it. 00:30:39.578 [2024-04-17 06:56:43.950785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.950963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.950988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.578 qpair failed and we were unable to recover it. 00:30:39.578 [2024-04-17 06:56:43.951110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.951264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.951289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.578 qpair failed and we were unable to recover it. 00:30:39.578 [2024-04-17 06:56:43.951416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.951608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.951633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.578 qpair failed and we were unable to recover it. 00:30:39.578 [2024-04-17 06:56:43.951814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.951939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.951969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.578 qpair failed and we were unable to recover it. 00:30:39.578 [2024-04-17 06:56:43.952145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.952276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.952301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.578 qpair failed and we were unable to recover it. 
00:30:39.578 [2024-04-17 06:56:43.952507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.952652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.952678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.578 qpair failed and we were unable to recover it. 00:30:39.578 [2024-04-17 06:56:43.952860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.953000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.953027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.578 qpair failed and we were unable to recover it. 00:30:39.578 [2024-04-17 06:56:43.953186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.953347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.953373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.578 qpair failed and we were unable to recover it. 00:30:39.578 [2024-04-17 06:56:43.953527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.953703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.953732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.578 qpair failed and we were unable to recover it. 00:30:39.578 [2024-04-17 06:56:43.953940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.954086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.954110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.578 qpair failed and we were unable to recover it. 00:30:39.578 [2024-04-17 06:56:43.954235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.954388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.954431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.578 qpair failed and we were unable to recover it. 00:30:39.578 [2024-04-17 06:56:43.954602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.954777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.954804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.578 qpair failed and we were unable to recover it. 
00:30:39.578 [2024-04-17 06:56:43.954943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.955115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.955143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.578 qpair failed and we were unable to recover it. 00:30:39.578 [2024-04-17 06:56:43.955288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.955414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.955444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.578 qpair failed and we were unable to recover it. 00:30:39.578 [2024-04-17 06:56:43.955636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.955822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.955846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.578 qpair failed and we were unable to recover it. 00:30:39.578 [2024-04-17 06:56:43.955993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.956145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.578 [2024-04-17 06:56:43.956172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.578 qpair failed and we were unable to recover it. 00:30:39.578 [2024-04-17 06:56:43.956355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.956486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.956528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.579 qpair failed and we were unable to recover it. 00:30:39.579 [2024-04-17 06:56:43.956703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.956885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.956915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.579 qpair failed and we were unable to recover it. 00:30:39.579 [2024-04-17 06:56:43.957129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.957279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.957306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.579 qpair failed and we were unable to recover it. 
00:30:39.579 [2024-04-17 06:56:43.957490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.957661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.957687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.579 qpair failed and we were unable to recover it. 00:30:39.579 [2024-04-17 06:56:43.957825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.957963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.957991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.579 qpair failed and we were unable to recover it. 00:30:39.579 [2024-04-17 06:56:43.958160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.958353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.958380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.579 qpair failed and we were unable to recover it. 00:30:39.579 [2024-04-17 06:56:43.958568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.958688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.958728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.579 qpair failed and we were unable to recover it. 00:30:39.579 [2024-04-17 06:56:43.958868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.959064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.959096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.579 qpair failed and we were unable to recover it. 00:30:39.579 [2024-04-17 06:56:43.959258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.959458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.959486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.579 qpair failed and we were unable to recover it. 00:30:39.579 [2024-04-17 06:56:43.959660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.959831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.959861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.579 qpair failed and we were unable to recover it. 
00:30:39.579 [2024-04-17 06:56:43.960063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.960233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.960260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.579 qpair failed and we were unable to recover it. 00:30:39.579 [2024-04-17 06:56:43.960404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.960535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.960562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.579 qpair failed and we were unable to recover it. 00:30:39.579 [2024-04-17 06:56:43.960705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.960826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.960851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.579 qpair failed and we were unable to recover it. 00:30:39.579 [2024-04-17 06:56:43.961033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.961217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.961242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.579 qpair failed and we were unable to recover it. 00:30:39.579 [2024-04-17 06:56:43.961366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.961532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.961558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.579 qpair failed and we were unable to recover it. 00:30:39.579 [2024-04-17 06:56:43.961711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.961847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.961872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.579 qpair failed and we were unable to recover it. 00:30:39.579 [2024-04-17 06:56:43.962055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.962228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.962256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.579 qpair failed and we were unable to recover it. 
00:30:39.579 [2024-04-17 06:56:43.962420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.962566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.962592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.579 qpair failed and we were unable to recover it. 00:30:39.579 [2024-04-17 06:56:43.962797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.962922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.962962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.579 qpair failed and we were unable to recover it. 00:30:39.579 [2024-04-17 06:56:43.963118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.963263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.963290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.579 qpair failed and we were unable to recover it. 00:30:39.579 [2024-04-17 06:56:43.963485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.963651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.963678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.579 qpair failed and we were unable to recover it. 00:30:39.579 [2024-04-17 06:56:43.963830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.963956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.963980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.579 qpair failed and we were unable to recover it. 00:30:39.579 [2024-04-17 06:56:43.964167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.964349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.964375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.579 qpair failed and we were unable to recover it. 00:30:39.579 [2024-04-17 06:56:43.964514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.964650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.964676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.579 qpair failed and we were unable to recover it. 
00:30:39.579 [2024-04-17 06:56:43.964820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.965005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.965046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.579 qpair failed and we were unable to recover it. 00:30:39.579 [2024-04-17 06:56:43.965216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.965381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.965407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.579 qpair failed and we were unable to recover it. 00:30:39.579 [2024-04-17 06:56:43.965576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.965750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.965777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.579 qpair failed and we were unable to recover it. 00:30:39.579 [2024-04-17 06:56:43.965952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.966153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.966196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.579 qpair failed and we were unable to recover it. 00:30:39.579 [2024-04-17 06:56:43.966337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.966486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.579 [2024-04-17 06:56:43.966513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.579 qpair failed and we were unable to recover it. 00:30:39.580 [2024-04-17 06:56:43.966701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.966871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.966897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.580 qpair failed and we were unable to recover it. 00:30:39.580 [2024-04-17 06:56:43.967075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.967230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.967256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.580 qpair failed and we were unable to recover it. 
00:30:39.580 [2024-04-17 06:56:43.967447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.967599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.967624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.580 qpair failed and we were unable to recover it. 00:30:39.580 [2024-04-17 06:56:43.967774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.967910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.967936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.580 qpair failed and we were unable to recover it. 00:30:39.580 [2024-04-17 06:56:43.968092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.968231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.968259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.580 qpair failed and we were unable to recover it. 00:30:39.580 [2024-04-17 06:56:43.968439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.968623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.968647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.580 qpair failed and we were unable to recover it. 00:30:39.580 [2024-04-17 06:56:43.968821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.968998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.969026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.580 qpair failed and we were unable to recover it. 00:30:39.580 [2024-04-17 06:56:43.969185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.969319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.969344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.580 qpair failed and we were unable to recover it. 00:30:39.580 [2024-04-17 06:56:43.969471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.969594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.969618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.580 qpair failed and we were unable to recover it. 
00:30:39.580 [2024-04-17 06:56:43.969759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.969898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.969927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.580 qpair failed and we were unable to recover it. 00:30:39.580 [2024-04-17 06:56:43.970083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.970206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.970231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.580 qpair failed and we were unable to recover it. 00:30:39.580 [2024-04-17 06:56:43.970379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.970535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.970558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.580 qpair failed and we were unable to recover it. 00:30:39.580 [2024-04-17 06:56:43.970756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.970888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.970911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.580 qpair failed and we were unable to recover it. 00:30:39.580 [2024-04-17 06:56:43.971065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.971225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.971266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.580 qpair failed and we were unable to recover it. 00:30:39.580 [2024-04-17 06:56:43.971440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.971605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.971631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.580 qpair failed and we were unable to recover it. 00:30:39.580 [2024-04-17 06:56:43.971800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.972007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.972031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.580 qpair failed and we were unable to recover it. 
00:30:39.580 [2024-04-17 06:56:43.972157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.972297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.972323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.580 qpair failed and we were unable to recover it. 00:30:39.580 [2024-04-17 06:56:43.972509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.972659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.972683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.580 qpair failed and we were unable to recover it. 00:30:39.580 [2024-04-17 06:56:43.972804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.972960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.972984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.580 qpair failed and we were unable to recover it. 00:30:39.580 [2024-04-17 06:56:43.973185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.973341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.973365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.580 qpair failed and we were unable to recover it. 00:30:39.580 [2024-04-17 06:56:43.973566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.973696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.973720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.580 qpair failed and we were unable to recover it. 00:30:39.580 [2024-04-17 06:56:43.973919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.974067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.974092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.580 qpair failed and we were unable to recover it. 00:30:39.580 [2024-04-17 06:56:43.974227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.974366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.974391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.580 qpair failed and we were unable to recover it. 
00:30:39.580 [2024-04-17 06:56:43.974585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.974766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.974808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.580 qpair failed and we were unable to recover it. 00:30:39.580 [2024-04-17 06:56:43.975013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.580 [2024-04-17 06:56:43.975189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.975216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.581 qpair failed and we were unable to recover it. 00:30:39.581 [2024-04-17 06:56:43.975392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.975537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.975560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.581 qpair failed and we were unable to recover it. 00:30:39.581 [2024-04-17 06:56:43.975772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.975903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.975928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.581 qpair failed and we were unable to recover it. 00:30:39.581 [2024-04-17 06:56:43.976122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.976296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.976337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.581 qpair failed and we were unable to recover it. 00:30:39.581 [2024-04-17 06:56:43.976525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.976652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.976693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.581 qpair failed and we were unable to recover it. 00:30:39.581 [2024-04-17 06:56:43.976878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.977049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.977075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.581 qpair failed and we were unable to recover it. 
00:30:39.581 [2024-04-17 06:56:43.977239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.977400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.977427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.581 qpair failed and we were unable to recover it. 00:30:39.581 [2024-04-17 06:56:43.977612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.977785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.977812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.581 qpair failed and we were unable to recover it. 00:30:39.581 [2024-04-17 06:56:43.977949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.978129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.978152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.581 qpair failed and we were unable to recover it. 00:30:39.581 [2024-04-17 06:56:43.978303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.978472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.978499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.581 qpair failed and we were unable to recover it. 00:30:39.581 [2024-04-17 06:56:43.978677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.978825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.978864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.581 qpair failed and we were unable to recover it. 00:30:39.581 [2024-04-17 06:56:43.979014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.979188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.979219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.581 qpair failed and we were unable to recover it. 00:30:39.581 [2024-04-17 06:56:43.979364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.979564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.979592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.581 qpair failed and we were unable to recover it. 
00:30:39.581 [2024-04-17 06:56:43.979759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.979882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.979905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.581 qpair failed and we were unable to recover it. 00:30:39.581 [2024-04-17 06:56:43.980045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.980224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.980251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.581 qpair failed and we were unable to recover it. 00:30:39.581 [2024-04-17 06:56:43.980455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.980654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.980680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.581 qpair failed and we were unable to recover it. 00:30:39.581 [2024-04-17 06:56:43.980863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.980992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.981016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.581 qpair failed and we were unable to recover it. 00:30:39.581 [2024-04-17 06:56:43.981173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.981333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.981360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.581 qpair failed and we were unable to recover it. 00:30:39.581 [2024-04-17 06:56:43.981514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.981679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.981704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.581 qpair failed and we were unable to recover it. 00:30:39.581 [2024-04-17 06:56:43.981863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.981992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.982018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.581 qpair failed and we were unable to recover it. 
00:30:39.581 [2024-04-17 06:56:43.982193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.982343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.982370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.581 qpair failed and we were unable to recover it. 00:30:39.581 [2024-04-17 06:56:43.982533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.982731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.982757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.581 qpair failed and we were unable to recover it. 00:30:39.581 [2024-04-17 06:56:43.982909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.983036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.983060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.581 qpair failed and we were unable to recover it. 00:30:39.581 [2024-04-17 06:56:43.983207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.983345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.983372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.581 qpair failed and we were unable to recover it. 00:30:39.581 [2024-04-17 06:56:43.983588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.983708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.983732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.581 qpair failed and we were unable to recover it. 00:30:39.581 [2024-04-17 06:56:43.983895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.984060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.984086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.581 qpair failed and we were unable to recover it. 00:30:39.581 [2024-04-17 06:56:43.984256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.984392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.984417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.581 qpair failed and we were unable to recover it. 
00:30:39.581 [2024-04-17 06:56:43.984552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.984735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.984762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.581 qpair failed and we were unable to recover it. 00:30:39.581 [2024-04-17 06:56:43.984903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.985055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.581 [2024-04-17 06:56:43.985078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.581 qpair failed and we were unable to recover it. 00:30:39.581 [2024-04-17 06:56:43.985241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.985385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.985412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.582 qpair failed and we were unable to recover it. 00:30:39.582 [2024-04-17 06:56:43.985555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.985722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.985748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.582 qpair failed and we were unable to recover it. 00:30:39.582 [2024-04-17 06:56:43.985914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.986039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.986062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.582 qpair failed and we were unable to recover it. 00:30:39.582 [2024-04-17 06:56:43.986232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.986376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.986402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.582 qpair failed and we were unable to recover it. 00:30:39.582 [2024-04-17 06:56:43.986543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.986717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.986740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.582 qpair failed and we were unable to recover it. 
00:30:39.582 [2024-04-17 06:56:43.986895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.987076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.987101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.582 qpair failed and we were unable to recover it. 00:30:39.582 [2024-04-17 06:56:43.987298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.987472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.987497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.582 qpair failed and we were unable to recover it. 00:30:39.582 [2024-04-17 06:56:43.987644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.987792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.987818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.582 qpair failed and we were unable to recover it. 00:30:39.582 [2024-04-17 06:56:43.987984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.988164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.988195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.582 qpair failed and we were unable to recover it. 00:30:39.582 [2024-04-17 06:56:43.988319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.988473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.988500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.582 qpair failed and we were unable to recover it. 00:30:39.582 [2024-04-17 06:56:43.988724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.988880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.988904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.582 qpair failed and we were unable to recover it. 00:30:39.582 [2024-04-17 06:56:43.989059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.989231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.989258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.582 qpair failed and we were unable to recover it. 
00:30:39.582 [2024-04-17 06:56:43.989454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.989589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.989616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.582 qpair failed and we were unable to recover it. 00:30:39.582 [2024-04-17 06:56:43.989777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.989914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.989942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.582 qpair failed and we were unable to recover it. 00:30:39.582 [2024-04-17 06:56:43.990091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.990233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.990260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.582 qpair failed and we were unable to recover it. 00:30:39.582 [2024-04-17 06:56:43.990433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.990563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.990587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.582 qpair failed and we were unable to recover it. 00:30:39.582 [2024-04-17 06:56:43.990771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.990903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.990927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.582 qpair failed and we were unable to recover it. 00:30:39.582 [2024-04-17 06:56:43.991146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.991305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.991329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.582 qpair failed and we were unable to recover it. 00:30:39.582 [2024-04-17 06:56:43.991451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.991578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.991601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.582 qpair failed and we were unable to recover it. 
00:30:39.582 [2024-04-17 06:56:43.991720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.991874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.991898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.582 qpair failed and we were unable to recover it. 00:30:39.582 [2024-04-17 06:56:43.992061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.992243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.992285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.582 qpair failed and we were unable to recover it. 00:30:39.582 [2024-04-17 06:56:43.992495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.992653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.992676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.582 qpair failed and we were unable to recover it. 00:30:39.582 [2024-04-17 06:56:43.992863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.993065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.993090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.582 qpair failed and we were unable to recover it. 00:30:39.582 [2024-04-17 06:56:43.993247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.993378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.993401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.582 qpair failed and we were unable to recover it. 00:30:39.582 [2024-04-17 06:56:43.993563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.993715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.993739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.582 qpair failed and we were unable to recover it. 00:30:39.582 [2024-04-17 06:56:43.993947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.994094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.994119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.582 qpair failed and we were unable to recover it. 
00:30:39.582 [2024-04-17 06:56:43.994324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.994452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.994478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.582 qpair failed and we were unable to recover it. 00:30:39.582 [2024-04-17 06:56:43.994635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.994789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.994815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.582 qpair failed and we were unable to recover it. 00:30:39.582 [2024-04-17 06:56:43.994970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.995134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.582 [2024-04-17 06:56:43.995161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.582 qpair failed and we were unable to recover it. 00:30:39.583 [2024-04-17 06:56:43.995350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:43.995509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:43.995534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.583 qpair failed and we were unable to recover it. 00:30:39.583 [2024-04-17 06:56:43.995656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:43.995827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:43.995851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.583 qpair failed and we were unable to recover it. 00:30:39.583 [2024-04-17 06:56:43.996024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:43.996215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:43.996240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.583 qpair failed and we were unable to recover it. 00:30:39.583 [2024-04-17 06:56:43.996362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:43.996511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:43.996535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.583 qpair failed and we were unable to recover it. 
00:30:39.583 [2024-04-17 06:56:43.996678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:43.996858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:43.996882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.583 qpair failed and we were unable to recover it. 00:30:39.583 [2024-04-17 06:56:43.997062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:43.997243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:43.997271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.583 qpair failed and we were unable to recover it. 00:30:39.583 [2024-04-17 06:56:43.997475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:43.997647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:43.997674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.583 qpair failed and we were unable to recover it. 00:30:39.583 [2024-04-17 06:56:43.997822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:43.997968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:43.997996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.583 qpair failed and we were unable to recover it. 00:30:39.583 [2024-04-17 06:56:43.998206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:43.998338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:43.998363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.583 qpair failed and we were unable to recover it. 00:30:39.583 [2024-04-17 06:56:43.998517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:43.998634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:43.998657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.583 qpair failed and we were unable to recover it. 00:30:39.583 [2024-04-17 06:56:43.998844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:43.998984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:43.999012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.583 qpair failed and we were unable to recover it. 
00:30:39.583 [2024-04-17 06:56:43.999147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:43.999310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:43.999338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.583 qpair failed and we were unable to recover it. 00:30:39.583 [2024-04-17 06:56:43.999539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:43.999716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:43.999745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.583 qpair failed and we were unable to recover it. 00:30:39.583 [2024-04-17 06:56:43.999910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:44.000100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:44.000126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.583 qpair failed and we were unable to recover it. 00:30:39.583 [2024-04-17 06:56:44.000270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:44.000431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:44.000458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.583 qpair failed and we were unable to recover it. 00:30:39.583 [2024-04-17 06:56:44.000660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:44.000829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:44.000857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.583 qpair failed and we were unable to recover it. 00:30:39.583 [2024-04-17 06:56:44.001018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:44.001149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:44.001182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.583 qpair failed and we were unable to recover it. 00:30:39.583 [2024-04-17 06:56:44.001333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:44.001479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:44.001507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.583 qpair failed and we were unable to recover it. 
00:30:39.583 [2024-04-17 06:56:44.001702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:44.001868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:44.001895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.583 qpair failed and we were unable to recover it. 00:30:39.583 [2024-04-17 06:56:44.002033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:44.002168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:44.002202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.583 qpair failed and we were unable to recover it. 00:30:39.583 [2024-04-17 06:56:44.002374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:44.002541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:44.002568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.583 qpair failed and we were unable to recover it. 00:30:39.583 [2024-04-17 06:56:44.002750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:44.002902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:44.002925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.583 qpair failed and we were unable to recover it. 00:30:39.583 [2024-04-17 06:56:44.003061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:44.003215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:44.003243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.583 qpair failed and we were unable to recover it. 00:30:39.583 [2024-04-17 06:56:44.003411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:44.003563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:44.003591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.583 qpair failed and we were unable to recover it. 00:30:39.583 [2024-04-17 06:56:44.003745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:44.003870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:44.003893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.583 qpair failed and we were unable to recover it. 
00:30:39.583 [2024-04-17 06:56:44.004044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:44.004198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:44.004227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.583 qpair failed and we were unable to recover it. 00:30:39.583 [2024-04-17 06:56:44.004390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:44.004563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:44.004589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.583 qpair failed and we were unable to recover it. 00:30:39.583 [2024-04-17 06:56:44.004762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:44.004888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:44.004916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.583 qpair failed and we were unable to recover it. 00:30:39.583 [2024-04-17 06:56:44.005070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.583 [2024-04-17 06:56:44.005210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.005237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.584 qpair failed and we were unable to recover it. 00:30:39.584 [2024-04-17 06:56:44.005381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.005520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.005547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.584 qpair failed and we were unable to recover it. 00:30:39.584 [2024-04-17 06:56:44.005731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.005858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.005899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.584 qpair failed and we were unable to recover it. 00:30:39.584 [2024-04-17 06:56:44.006067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.006242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.006269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.584 qpair failed and we were unable to recover it. 
00:30:39.584 [2024-04-17 06:56:44.006433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.006580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.006607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.584 qpair failed and we were unable to recover it. 00:30:39.584 [2024-04-17 06:56:44.006778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.006972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.006998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.584 qpair failed and we were unable to recover it. 00:30:39.584 [2024-04-17 06:56:44.007173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.007326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.007351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.584 qpair failed and we were unable to recover it. 00:30:39.584 [2024-04-17 06:56:44.007487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.007647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.007673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.584 qpair failed and we were unable to recover it. 00:30:39.584 [2024-04-17 06:56:44.007824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.007949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.007973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.584 qpair failed and we were unable to recover it. 00:30:39.584 [2024-04-17 06:56:44.008127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.008257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.008286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.584 qpair failed and we were unable to recover it. 00:30:39.584 [2024-04-17 06:56:44.008413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.008596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.008621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.584 qpair failed and we were unable to recover it. 
00:30:39.584 [2024-04-17 06:56:44.008746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.008914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.008938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.584 qpair failed and we were unable to recover it. 00:30:39.584 [2024-04-17 06:56:44.009097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.009247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.009287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.584 qpair failed and we were unable to recover it. 00:30:39.584 [2024-04-17 06:56:44.009498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.009628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.009670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.584 qpair failed and we were unable to recover it. 00:30:39.584 [2024-04-17 06:56:44.009856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.010033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.010074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.584 qpair failed and we were unable to recover it. 00:30:39.584 [2024-04-17 06:56:44.010216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.010389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.010413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.584 qpair failed and we were unable to recover it. 00:30:39.584 [2024-04-17 06:56:44.010570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.010702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.010726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.584 qpair failed and we were unable to recover it. 00:30:39.584 [2024-04-17 06:56:44.010873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.010996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.011019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.584 qpair failed and we were unable to recover it. 
00:30:39.584 [2024-04-17 06:56:44.011149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.011271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.011295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.584 qpair failed and we were unable to recover it. 00:30:39.584 [2024-04-17 06:56:44.011475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.011606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.011634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.584 qpair failed and we were unable to recover it. 00:30:39.584 [2024-04-17 06:56:44.011803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.011937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.011961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.584 qpair failed and we were unable to recover it. 00:30:39.584 [2024-04-17 06:56:44.012093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.012243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.012272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.584 qpair failed and we were unable to recover it. 00:30:39.584 [2024-04-17 06:56:44.012449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.012581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.012606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.584 qpair failed and we were unable to recover it. 00:30:39.584 [2024-04-17 06:56:44.012775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.012944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.012971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.584 qpair failed and we were unable to recover it. 00:30:39.584 [2024-04-17 06:56:44.013151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.013296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.013320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.584 qpair failed and we were unable to recover it. 
00:30:39.584 [2024-04-17 06:56:44.013452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.013604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.013629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.584 qpair failed and we were unable to recover it. 00:30:39.584 [2024-04-17 06:56:44.013797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.013950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.013991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.584 qpair failed and we were unable to recover it. 00:30:39.584 [2024-04-17 06:56:44.014161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.014311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.014338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.584 qpair failed and we were unable to recover it. 00:30:39.584 [2024-04-17 06:56:44.014479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.014621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.584 [2024-04-17 06:56:44.014647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.584 qpair failed and we were unable to recover it. 00:30:39.584 [2024-04-17 06:56:44.014801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.014974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.015002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.585 qpair failed and we were unable to recover it. 00:30:39.585 [2024-04-17 06:56:44.015190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.015368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.015392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.585 qpair failed and we were unable to recover it. 00:30:39.585 [2024-04-17 06:56:44.015537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.015659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.015683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.585 qpair failed and we were unable to recover it. 
00:30:39.585 [2024-04-17 06:56:44.015866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.016043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.016071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.585 qpair failed and we were unable to recover it. 00:30:39.585 [2024-04-17 06:56:44.016250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.016426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.016453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.585 qpair failed and we were unable to recover it. 00:30:39.585 [2024-04-17 06:56:44.016595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.016781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.016805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.585 qpair failed and we were unable to recover it. 00:30:39.585 [2024-04-17 06:56:44.016960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.017132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.017159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.585 qpair failed and we were unable to recover it. 00:30:39.585 [2024-04-17 06:56:44.017337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.017483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.017510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.585 qpair failed and we were unable to recover it. 00:30:39.585 [2024-04-17 06:56:44.017647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.017783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.017809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.585 qpair failed and we were unable to recover it. 00:30:39.585 [2024-04-17 06:56:44.017995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.018166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.018200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.585 qpair failed and we were unable to recover it. 
00:30:39.585 [2024-04-17 06:56:44.018347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.018480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.018507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.585 qpair failed and we were unable to recover it. 00:30:39.585 [2024-04-17 06:56:44.018647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.018804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.018830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.585 qpair failed and we were unable to recover it. 00:30:39.585 [2024-04-17 06:56:44.019010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.019147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.019194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.585 qpair failed and we were unable to recover it. 00:30:39.585 [2024-04-17 06:56:44.019337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.019485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.019512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.585 qpair failed and we were unable to recover it. 00:30:39.585 [2024-04-17 06:56:44.019644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.019791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.019818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.585 qpair failed and we were unable to recover it. 00:30:39.585 [2024-04-17 06:56:44.020020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.020193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.020220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.585 qpair failed and we were unable to recover it. 00:30:39.585 [2024-04-17 06:56:44.020402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.020590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.020614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.585 qpair failed and we were unable to recover it. 
00:30:39.585 [2024-04-17 06:56:44.020796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.020986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.021012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.585 qpair failed and we were unable to recover it. 00:30:39.585 [2024-04-17 06:56:44.021218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.021376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.021402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.585 qpair failed and we were unable to recover it. 00:30:39.585 [2024-04-17 06:56:44.021549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.021752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.021779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.585 qpair failed and we were unable to recover it. 00:30:39.585 [2024-04-17 06:56:44.021952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.022100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.022127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.585 qpair failed and we were unable to recover it. 00:30:39.585 [2024-04-17 06:56:44.022300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.022429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.022453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.585 qpair failed and we were unable to recover it. 00:30:39.585 [2024-04-17 06:56:44.022625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.022837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.022866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.585 qpair failed and we were unable to recover it. 00:30:39.585 [2024-04-17 06:56:44.023023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.023204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.585 [2024-04-17 06:56:44.023255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.585 qpair failed and we were unable to recover it. 
00:30:39.585 [2024-04-17 06:56:44.023442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.023614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.023662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.586 qpair failed and we were unable to recover it. 00:30:39.586 [2024-04-17 06:56:44.023805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.023950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.023977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.586 qpair failed and we were unable to recover it. 00:30:39.586 [2024-04-17 06:56:44.024145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.024310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.024337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.586 qpair failed and we were unable to recover it. 00:30:39.586 [2024-04-17 06:56:44.024488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.024619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.024642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.586 qpair failed and we were unable to recover it. 00:30:39.586 [2024-04-17 06:56:44.024799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.024942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.024970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.586 qpair failed and we were unable to recover it. 00:30:39.586 [2024-04-17 06:56:44.025142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.025295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.025322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.586 qpair failed and we were unable to recover it. 00:30:39.586 [2024-04-17 06:56:44.025504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.025634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.025674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.586 qpair failed and we were unable to recover it. 
00:30:39.586 [2024-04-17 06:56:44.025851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.025989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.026015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.586 qpair failed and we were unable to recover it. 00:30:39.586 [2024-04-17 06:56:44.026189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.026337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.026364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.586 qpair failed and we were unable to recover it. 00:30:39.586 [2024-04-17 06:56:44.026547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.026704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.026728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.586 qpair failed and we were unable to recover it. 00:30:39.586 [2024-04-17 06:56:44.026876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.027014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.027040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.586 qpair failed and we were unable to recover it. 00:30:39.586 [2024-04-17 06:56:44.027228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.027412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.027436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.586 qpair failed and we were unable to recover it. 00:30:39.586 [2024-04-17 06:56:44.027560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.027684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.027709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.586 qpair failed and we were unable to recover it. 00:30:39.586 [2024-04-17 06:56:44.027840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.027986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.028015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.586 qpair failed and we were unable to recover it. 
00:30:39.586 [2024-04-17 06:56:44.028230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.028355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.028380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.586 qpair failed and we were unable to recover it. 00:30:39.586 [2024-04-17 06:56:44.028534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.028654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.028678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.586 qpair failed and we were unable to recover it. 00:30:39.586 [2024-04-17 06:56:44.028811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.028998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.029023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.586 qpair failed and we were unable to recover it. 00:30:39.586 [2024-04-17 06:56:44.029151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.029317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.029345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.586 qpair failed and we were unable to recover it. 00:30:39.586 [2024-04-17 06:56:44.029505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.029664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.029688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.586 qpair failed and we were unable to recover it. 00:30:39.586 [2024-04-17 06:56:44.029811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.029963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.029992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.586 qpair failed and we were unable to recover it. 00:30:39.586 [2024-04-17 06:56:44.030158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.030336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.030364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.586 qpair failed and we were unable to recover it. 
00:30:39.586 [2024-04-17 06:56:44.030512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.030641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.030665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.586 qpair failed and we were unable to recover it. 00:30:39.586 [2024-04-17 06:56:44.030794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.030983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.031007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.586 qpair failed and we were unable to recover it. 00:30:39.586 [2024-04-17 06:56:44.031152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.031380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.031408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.586 qpair failed and we were unable to recover it. 00:30:39.586 [2024-04-17 06:56:44.031550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.031707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.031730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.586 qpair failed and we were unable to recover it. 00:30:39.586 [2024-04-17 06:56:44.031854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.032039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.032067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.586 qpair failed and we were unable to recover it. 00:30:39.586 [2024-04-17 06:56:44.032218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.032422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.032449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.586 qpair failed and we were unable to recover it. 00:30:39.586 [2024-04-17 06:56:44.032662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.032819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.032845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.586 qpair failed and we were unable to recover it. 
00:30:39.586 [2024-04-17 06:56:44.033003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.033184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.586 [2024-04-17 06:56:44.033212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.586 qpair failed and we were unable to recover it. 00:30:39.587 [2024-04-17 06:56:44.033379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.033519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.033548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.587 qpair failed and we were unable to recover it. 00:30:39.587 [2024-04-17 06:56:44.033694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.033849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.033873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.587 qpair failed and we were unable to recover it. 00:30:39.587 [2024-04-17 06:56:44.034029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.034200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.034230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.587 qpair failed and we were unable to recover it. 00:30:39.587 [2024-04-17 06:56:44.034373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.034543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.034571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.587 qpair failed and we were unable to recover it. 00:30:39.587 [2024-04-17 06:56:44.034748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.034901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.034925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.587 qpair failed and we were unable to recover it. 00:30:39.587 [2024-04-17 06:56:44.035086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.035241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.035270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.587 qpair failed and we were unable to recover it. 
00:30:39.587 [2024-04-17 06:56:44.035457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.035614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.035638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.587 qpair failed and we were unable to recover it. 00:30:39.587 [2024-04-17 06:56:44.035793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.035973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.035997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.587 qpair failed and we were unable to recover it. 00:30:39.587 [2024-04-17 06:56:44.036159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.036366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.036390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.587 qpair failed and we were unable to recover it. 00:30:39.587 [2024-04-17 06:56:44.036535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.036699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.036724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.587 qpair failed and we were unable to recover it. 00:30:39.587 [2024-04-17 06:56:44.036881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.037053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.037082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.587 qpair failed and we were unable to recover it. 00:30:39.587 [2024-04-17 06:56:44.037258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.037396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.037423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.587 qpair failed and we were unable to recover it. 00:30:39.587 [2024-04-17 06:56:44.037592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.037735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.037763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.587 qpair failed and we were unable to recover it. 
00:30:39.587 [2024-04-17 06:56:44.037941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.038097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.038122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.587 qpair failed and we were unable to recover it. 00:30:39.587 [2024-04-17 06:56:44.038263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.038425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.038452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.587 qpair failed and we were unable to recover it. 00:30:39.587 [2024-04-17 06:56:44.038659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.038797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.038824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.587 qpair failed and we were unable to recover it. 00:30:39.587 [2024-04-17 06:56:44.038986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.039183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.039211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.587 qpair failed and we were unable to recover it. 00:30:39.587 [2024-04-17 06:56:44.039380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.039544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.039571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:39.587 qpair failed and we were unable to recover it. 00:30:39.587 [2024-04-17 06:56:44.039809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.040001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.040049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.587 qpair failed and we were unable to recover it. 00:30:39.587 [2024-04-17 06:56:44.040218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.040344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.040369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.587 qpair failed and we were unable to recover it. 
00:30:39.587 [2024-04-17 06:56:44.040591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.040718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.040743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.587 qpair failed and we were unable to recover it. 00:30:39.587 [2024-04-17 06:56:44.040886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.041025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.041052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.587 qpair failed and we were unable to recover it. 00:30:39.587 [2024-04-17 06:56:44.041255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.041401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.041428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.587 qpair failed and we were unable to recover it. 00:30:39.587 [2024-04-17 06:56:44.041575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.041790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.041834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.587 qpair failed and we were unable to recover it. 00:30:39.587 [2024-04-17 06:56:44.041972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.042115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.042141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.587 qpair failed and we were unable to recover it. 00:30:39.587 [2024-04-17 06:56:44.042300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.042423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.042448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.587 qpair failed and we were unable to recover it. 00:30:39.587 [2024-04-17 06:56:44.042612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.042745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.042772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.587 qpair failed and we were unable to recover it. 
00:30:39.587 [2024-04-17 06:56:44.042939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.043102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.043128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.587 qpair failed and we were unable to recover it. 00:30:39.587 [2024-04-17 06:56:44.043273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.043408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.587 [2024-04-17 06:56:44.043433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.587 qpair failed and we were unable to recover it. 00:30:39.588 [2024-04-17 06:56:44.043613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.043754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.043781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.588 qpair failed and we were unable to recover it. 00:30:39.588 [2024-04-17 06:56:44.043955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.044120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.044146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.588 qpair failed and we were unable to recover it. 00:30:39.588 [2024-04-17 06:56:44.044304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.044425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.044449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.588 qpair failed and we were unable to recover it. 00:30:39.588 [2024-04-17 06:56:44.044619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.044765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.044813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.588 qpair failed and we were unable to recover it. 00:30:39.588 [2024-04-17 06:56:44.044970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.045106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.045135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.588 qpair failed and we were unable to recover it. 
00:30:39.588 [2024-04-17 06:56:44.045327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.045448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.045473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.588 qpair failed and we were unable to recover it. 00:30:39.588 [2024-04-17 06:56:44.045650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.045842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.045885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.588 qpair failed and we were unable to recover it. 00:30:39.588 [2024-04-17 06:56:44.046024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.046235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.046263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.588 qpair failed and we were unable to recover it. 00:30:39.588 [2024-04-17 06:56:44.046445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.046620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.046647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.588 qpair failed and we were unable to recover it. 00:30:39.588 [2024-04-17 06:56:44.046851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.047003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.047051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.588 qpair failed and we were unable to recover it. 00:30:39.588 [2024-04-17 06:56:44.047198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.047355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.047379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.588 qpair failed and we were unable to recover it. 00:30:39.588 [2024-04-17 06:56:44.047512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.047643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.047667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.588 qpair failed and we were unable to recover it. 
00:30:39.588 [2024-04-17 06:56:44.047786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.047911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.047936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.588 qpair failed and we were unable to recover it. 00:30:39.588 [2024-04-17 06:56:44.048094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.048295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.048320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.588 qpair failed and we were unable to recover it. 00:30:39.588 [2024-04-17 06:56:44.048480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.048636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.048660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.588 qpair failed and we were unable to recover it. 00:30:39.588 [2024-04-17 06:56:44.048791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.048943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.048966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.588 qpair failed and we were unable to recover it. 00:30:39.588 [2024-04-17 06:56:44.049103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.049233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.049258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.588 qpair failed and we were unable to recover it. 00:30:39.588 [2024-04-17 06:56:44.049391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.049547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.049571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.588 qpair failed and we were unable to recover it. 00:30:39.588 [2024-04-17 06:56:44.049739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.049929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.049956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.588 qpair failed and we were unable to recover it. 
00:30:39.588 [2024-04-17 06:56:44.050154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.050300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.050329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.588 qpair failed and we were unable to recover it. 00:30:39.588 [2024-04-17 06:56:44.050460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.050587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.050611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.588 qpair failed and we were unable to recover it. 00:30:39.588 [2024-04-17 06:56:44.050771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.050925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.050952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.588 qpair failed and we were unable to recover it. 00:30:39.588 [2024-04-17 06:56:44.051121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.051293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.051317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.588 qpair failed and we were unable to recover it. 00:30:39.588 [2024-04-17 06:56:44.051449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.051592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.051616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.588 qpair failed and we were unable to recover it. 00:30:39.588 [2024-04-17 06:56:44.051736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.051893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.051916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.588 qpair failed and we were unable to recover it. 00:30:39.588 [2024-04-17 06:56:44.052080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.052245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.588 [2024-04-17 06:56:44.052269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.588 qpair failed and we were unable to recover it. 
00:30:39.594 [2024-04-17 06:56:44.104049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.594 [2024-04-17 06:56:44.104212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.594 [2024-04-17 06:56:44.104237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.594 qpair failed and we were unable to recover it. 00:30:39.594 [2024-04-17 06:56:44.104364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.594 [2024-04-17 06:56:44.104494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.594 [2024-04-17 06:56:44.104528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.594 qpair failed and we were unable to recover it. 00:30:39.594 [2024-04-17 06:56:44.104697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.594 [2024-04-17 06:56:44.104823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.594 [2024-04-17 06:56:44.104852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.594 qpair failed and we were unable to recover it. 00:30:39.594 [2024-04-17 06:56:44.105009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.594 [2024-04-17 06:56:44.105159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.594 [2024-04-17 06:56:44.105189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.594 qpair failed and we were unable to recover it. 00:30:39.594 [2024-04-17 06:56:44.105322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.594 [2024-04-17 06:56:44.105454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.594 [2024-04-17 06:56:44.105478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.594 qpair failed and we were unable to recover it. 00:30:39.594 [2024-04-17 06:56:44.105599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.594 [2024-04-17 06:56:44.105780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.594 [2024-04-17 06:56:44.105804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.594 qpair failed and we were unable to recover it. 00:30:39.594 [2024-04-17 06:56:44.105972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.594 [2024-04-17 06:56:44.106224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.594 [2024-04-17 06:56:44.106252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.594 qpair failed and we were unable to recover it. 
00:30:39.594 [2024-04-17 06:56:44.106450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.594 [2024-04-17 06:56:44.106592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.594 [2024-04-17 06:56:44.106622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.594 qpair failed and we were unable to recover it. 00:30:39.594 [2024-04-17 06:56:44.106862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.594 [2024-04-17 06:56:44.107029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.594 [2024-04-17 06:56:44.107057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.594 qpair failed and we were unable to recover it. 00:30:39.594 [2024-04-17 06:56:44.107239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.594 [2024-04-17 06:56:44.107376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.594 [2024-04-17 06:56:44.107416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.594 qpair failed and we were unable to recover it. 00:30:39.594 [2024-04-17 06:56:44.107629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.594 [2024-04-17 06:56:44.107778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.594 [2024-04-17 06:56:44.107802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.594 qpair failed and we were unable to recover it. 00:30:39.594 [2024-04-17 06:56:44.107970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.594 [2024-04-17 06:56:44.108100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.594 [2024-04-17 06:56:44.108127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.594 qpair failed and we were unable to recover it. 00:30:39.594 [2024-04-17 06:56:44.108313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.594 [2024-04-17 06:56:44.108437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.594 [2024-04-17 06:56:44.108480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.594 qpair failed and we were unable to recover it. 00:30:39.594 [2024-04-17 06:56:44.108657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.594 [2024-04-17 06:56:44.108790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.594 [2024-04-17 06:56:44.108816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:39.594 qpair failed and we were unable to recover it. 
00:30:39.594 [2024-04-17 06:56:44.109060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.594 [2024-04-17 06:56:44.109200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.594 [2024-04-17 06:56:44.109232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.173 qpair failed and we were unable to recover it. 00:30:40.173 [2024-04-17 06:56:44.540148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.540363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.540392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.173 qpair failed and we were unable to recover it. 00:30:40.173 [2024-04-17 06:56:44.540584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.540740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.540763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.173 qpair failed and we were unable to recover it. 00:30:40.173 [2024-04-17 06:56:44.540941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.541261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.541289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.173 qpair failed and we were unable to recover it. 00:30:40.173 [2024-04-17 06:56:44.541486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.541635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.541661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.173 qpair failed and we were unable to recover it. 00:30:40.173 [2024-04-17 06:56:44.541839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.542028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.542055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.173 qpair failed and we were unable to recover it. 00:30:40.173 [2024-04-17 06:56:44.542196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.542348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.542376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.173 qpair failed and we were unable to recover it. 
00:30:40.173 [2024-04-17 06:56:44.542560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.542758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.542786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.173 qpair failed and we were unable to recover it. 00:30:40.173 [2024-04-17 06:56:44.542929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.543172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.543206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.173 qpair failed and we were unable to recover it. 00:30:40.173 [2024-04-17 06:56:44.543409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.543581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.543608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.173 qpair failed and we were unable to recover it. 00:30:40.173 [2024-04-17 06:56:44.543795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.543943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.543968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.173 qpair failed and we were unable to recover it. 00:30:40.173 [2024-04-17 06:56:44.544129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.544317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.544345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.173 qpair failed and we were unable to recover it. 00:30:40.173 [2024-04-17 06:56:44.544488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.544626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.544653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.173 qpair failed and we were unable to recover it. 00:30:40.173 [2024-04-17 06:56:44.544835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.545040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.545067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.173 qpair failed and we were unable to recover it. 
00:30:40.173 [2024-04-17 06:56:44.545265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.545437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.545465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.173 qpair failed and we were unable to recover it. 00:30:40.173 [2024-04-17 06:56:44.545661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.545799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.545827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.173 qpair failed and we were unable to recover it. 00:30:40.173 [2024-04-17 06:56:44.546039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.546184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.546213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.173 qpair failed and we were unable to recover it. 00:30:40.173 [2024-04-17 06:56:44.546394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.546546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.546588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.173 qpair failed and we were unable to recover it. 00:30:40.173 [2024-04-17 06:56:44.546764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.546931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.546959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.173 qpair failed and we were unable to recover it. 00:30:40.173 [2024-04-17 06:56:44.547107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.547293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.547322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.173 qpair failed and we were unable to recover it. 00:30:40.173 [2024-04-17 06:56:44.547455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.547591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.547619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.173 qpair failed and we were unable to recover it. 
00:30:40.173 [2024-04-17 06:56:44.547795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.547915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.547940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.173 qpair failed and we were unable to recover it. 00:30:40.173 [2024-04-17 06:56:44.548089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.548258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.548287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.173 qpair failed and we were unable to recover it. 00:30:40.173 [2024-04-17 06:56:44.548436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-04-17 06:56:44.548604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.548632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-04-17 06:56:44.548845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.548990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.549015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-04-17 06:56:44.549135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.549291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.549317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-04-17 06:56:44.549517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.549653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.549679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-04-17 06:56:44.549838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.550032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.550057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 
00:30:40.174 [2024-04-17 06:56:44.550185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.550340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.550366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-04-17 06:56:44.550549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.550705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.550730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-04-17 06:56:44.550927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.551086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.551111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-04-17 06:56:44.551292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.551447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.551475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-04-17 06:56:44.551720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.551869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.551895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-04-17 06:56:44.552081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.552241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.552267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-04-17 06:56:44.552464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.552625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.552653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 
00:30:40.174 [2024-04-17 06:56:44.552809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.552962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.552987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-04-17 06:56:44.553166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.553377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.553407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-04-17 06:56:44.553539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.553690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.553715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-04-17 06:56:44.553862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.554036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.554064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-04-17 06:56:44.554231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.554455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.554480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-04-17 06:56:44.554745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.554960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.555011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-04-17 06:56:44.555210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.555408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.555436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 
00:30:40.174 [2024-04-17 06:56:44.555605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.555791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.555816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-04-17 06:56:44.555949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.556103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.556129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-04-17 06:56:44.556298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.556467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.556495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-04-17 06:56:44.556695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.556817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.556857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-04-17 06:56:44.557042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.557219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.557250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-04-17 06:56:44.557409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.557579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.557605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-04-17 06:56:44.557934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.558171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.558207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 
00:30:40.174 [2024-04-17 06:56:44.558418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.558664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.558724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-04-17 06:56:44.558911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.559080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.559107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-04-17 06:56:44.559245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.559433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-04-17 06:56:44.559475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 123213 Killed "${NVMF_APP[@]}" "$@" 00:30:40.175 [2024-04-17 06:56:44.559724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.559951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.559979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-04-17 06:56:44.560165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 06:56:44 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 00:30:40.175 [2024-04-17 06:56:44.560355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.560381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 06:56:44 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:40.175 [2024-04-17 06:56:44.560572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 06:56:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:30:40.175 [2024-04-17 06:56:44.560827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 06:56:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:40.175 [2024-04-17 06:56:44.560884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 
00:30:40.175 06:56:44 -- common/autotest_common.sh@10 -- # set +x 00:30:40.175 [2024-04-17 06:56:44.561090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.561262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.561295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-04-17 06:56:44.561447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.561616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.561644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-04-17 06:56:44.561791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.561949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.561989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-04-17 06:56:44.562156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.562427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.562483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-04-17 06:56:44.562690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.562875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.562900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-04-17 06:56:44.563073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.563241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.563270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-04-17 06:56:44.563411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.563576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.563603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 
00:30:40.175 [2024-04-17 06:56:44.563785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.563957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.563984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-04-17 06:56:44.564150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.564333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.564361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-04-17 06:56:44.564523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.564785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.564838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-04-17 06:56:44.565043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.565252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.565281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 06:56:44 -- nvmf/common.sh@470 -- # nvmfpid=123712 00:30:40.175 06:56:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:40.175 06:56:44 -- nvmf/common.sh@471 -- # waitforlisten 123712 00:30:40.175 [2024-04-17 06:56:44.565472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.565642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 06:56:44 -- common/autotest_common.sh@817 -- # '[' -z 123712 ']' 00:30:40.175 [2024-04-17 06:56:44.565673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 06:56:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:40.175 [2024-04-17 06:56:44.565855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 06:56:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:40.175 [2024-04-17 06:56:44.566013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.566043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 06:56:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:40.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:40.175 [2024-04-17 06:56:44.566212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 06:56:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:40.175 [2024-04-17 06:56:44.566364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.566393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.175 06:56:44 -- common/autotest_common.sh@10 -- # set +x 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-04-17 06:56:44.566579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.566748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.566775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-04-17 06:56:44.566938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.567098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.567126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-04-17 06:56:44.567299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.567495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.567534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-04-17 06:56:44.567753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.567951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.567981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-04-17 06:56:44.568157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.568319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.568362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-04-17 06:56:44.568573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.568838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.568891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 
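At this point the old target app (pid 123213) has been killed and a new nvmf_tgt (pid 123712) is being started in the cvl_0_0_ns_spdk namespace; while waitforlisten polls for its RPC socket, the initiator keeps retrying the TCP connect and logging the same ECONNREFUSED records. A rough sketch of that retry pattern, with a made-up interval and attempt cap purely for illustration (the real SPDK reconnect behaviour is driven by its controller state machine, not by this loop):

```c
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Keep dialing ip:port until a connect() succeeds or we give up.
 * Illustrative only: the 100 ms interval and the attempt cap are
 * arbitrary values, not anything configured by the test. */
static bool wait_for_listener(const char *ip, uint16_t port, int max_attempts)
{
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(port) };
    inet_pton(AF_INET, ip, &addr.sin_addr);

    for (int attempt = 0; attempt < max_attempts; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return false;

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return true;            /* target is accepting connections again */
        }

        /* ECONNREFUSED while the target is still starting up; log and retry. */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        usleep(100 * 1000);         /* 100 ms between attempts (arbitrary) */
    }
    return false;
}

int main(void)
{
    /* Address and port taken from the records above. */
    return wait_for_listener("10.0.0.2", 4420, 50) ? 0 : 1;
}
```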
00:30:40.175 [2024-04-17 06:56:44.569089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.569237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.569265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-04-17 06:56:44.569502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.569746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-04-17 06:56:44.569794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-04-17 06:56:44.570047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.570196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.570235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-04-17 06:56:44.570410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.570668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.570722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-04-17 06:56:44.570921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.571062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.571091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-04-17 06:56:44.571296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.571435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.571476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-04-17 06:56:44.571674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.571835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.571860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 
00:30:40.176 [2024-04-17 06:56:44.572036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.572219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.572245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-04-17 06:56:44.572544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.572731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.572759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-04-17 06:56:44.572947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.573092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.573116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-04-17 06:56:44.573256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.573416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.573458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-04-17 06:56:44.573641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.573838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.573899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-04-17 06:56:44.574076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.574239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.574268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-04-17 06:56:44.574488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.574645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.574670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 
00:30:40.176 [2024-04-17 06:56:44.574821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.574968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.574992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-04-17 06:56:44.575168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.575376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.575404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-04-17 06:56:44.575551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.575728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.575796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-04-17 06:56:44.576011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.576201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.576244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-04-17 06:56:44.576430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.576599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.576639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-04-17 06:56:44.576823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.577054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.577079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-04-17 06:56:44.577276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.577556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.577604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 
00:30:40.176 [2024-04-17 06:56:44.577894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.578130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.578156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-04-17 06:56:44.578388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.578539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.578565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-04-17 06:56:44.578795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.578982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.579006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-04-17 06:56:44.579198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.579382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.579410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-04-17 06:56:44.579680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.579882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.579908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-04-17 06:56:44.580137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.580367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.580392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-04-17 06:56:44.580599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.580823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.580848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 
00:30:40.176 [2024-04-17 06:56:44.580969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.581095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.581120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-04-17 06:56:44.581357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.581507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.581544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-04-17 06:56:44.581779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.582061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-04-17 06:56:44.582113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.177 [2024-04-17 06:56:44.582321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.582556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.582607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-04-17 06:56:44.582810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.583057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.583112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-04-17 06:56:44.583325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.583509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.583575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-04-17 06:56:44.583824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.584004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.584034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 
00:30:40.177 [2024-04-17 06:56:44.584210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.584360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.584388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-04-17 06:56:44.584561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.584734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.584762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-04-17 06:56:44.584935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.585112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.585140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-04-17 06:56:44.585357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.585485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.585510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-04-17 06:56:44.585672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.585839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.585867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-04-17 06:56:44.586046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.586218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.586247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-04-17 06:56:44.586422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.586614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.586640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 
00:30:40.177 [2024-04-17 06:56:44.586882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.587084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.587112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-04-17 06:56:44.587310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.587493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.587518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-04-17 06:56:44.587680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.587863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.587890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-04-17 06:56:44.588063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.588202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.588232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-04-17 06:56:44.588400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.588557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.588600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-04-17 06:56:44.588771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.588976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.589004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-04-17 06:56:44.589180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.589318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.589346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 
00:30:40.177 [2024-04-17 06:56:44.589560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.589813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.589843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-04-17 06:56:44.590053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.590201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.590238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-04-17 06:56:44.590411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.590549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.590577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-04-17 06:56:44.590774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.590910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.590937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-04-17 06:56:44.591105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.591273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-04-17 06:56:44.591299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.178 [2024-04-17 06:56:44.591449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.591637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.591664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-04-17 06:56:44.591839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.592040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.592068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 
00:30:40.178 [2024-04-17 06:56:44.592231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.592408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.592436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-04-17 06:56:44.592618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.593284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.593319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-04-17 06:56:44.593511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.593639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.593682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-04-17 06:56:44.593857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.594031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.594056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-04-17 06:56:44.594214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.594377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.594403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-04-17 06:56:44.594624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.594745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.594770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-04-17 06:56:44.594924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.595048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.595074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 
00:30:40.178 [2024-04-17 06:56:44.595283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.595444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.595477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-04-17 06:56:44.595639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.595795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.595820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-04-17 06:56:44.595942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.596196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.596226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-04-17 06:56:44.596438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.596604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.596629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-04-17 06:56:44.596786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.596981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.597006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-04-17 06:56:44.597182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.597368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.597393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-04-17 06:56:44.597583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.597761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.597786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 
00:30:40.178 [2024-04-17 06:56:44.597946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.598108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.598136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-04-17 06:56:44.598308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.598460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.598486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-04-17 06:56:44.598637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.598835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.598863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-04-17 06:56:44.599063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.599247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.599273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-04-17 06:56:44.599424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.599629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.599670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-04-17 06:56:44.599890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.600069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.600097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-04-17 06:56:44.600286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.600437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.600474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 
00:30:40.178 [2024-04-17 06:56:44.600678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.600814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.600839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-04-17 06:56:44.600995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.601198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.601242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-04-17 06:56:44.601363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.601540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.601568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-04-17 06:56:44.601793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.602028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.602065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-04-17 06:56:44.602266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.602420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.602446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-04-17 06:56:44.602655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.602812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-04-17 06:56:44.602853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.179 [2024-04-17 06:56:44.602997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.603148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.603191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 
00:30:40.179 [2024-04-17 06:56:44.603378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.603545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.603571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-04-17 06:56:44.603695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.603900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.603929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-04-17 06:56:44.604104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.604281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.604307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-04-17 06:56:44.604431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.604594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.604620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-04-17 06:56:44.604777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.604950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.604978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-04-17 06:56:44.605134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.605299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.605325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-04-17 06:56:44.605451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.605625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.605676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 
00:30:40.179 [2024-04-17 06:56:44.605842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.606050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.606078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-04-17 06:56:44.606269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.606394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.606419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-04-17 06:56:44.606577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.606754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.606781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-04-17 06:56:44.606953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.607072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.607098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-04-17 06:56:44.607328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.607459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.607505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-04-17 06:56:44.607679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.607817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.607845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-04-17 06:56:44.608041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.608235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.608261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 
00:30:40.179 [2024-04-17 06:56:44.608412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.608582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.608609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-04-17 06:56:44.608755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.608923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.608951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-04-17 06:56:44.609153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.609384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.609410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-04-17 06:56:44.609601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.609729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.609754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-04-17 06:56:44.609911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.610041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.610067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-04-17 06:56:44.610283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.610497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.610529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-04-17 06:56:44.610736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.610878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.610905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 
00:30:40.179 [2024-04-17 06:56:44.611075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.611234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.611259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-04-17 06:56:44.611444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.611622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.611652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-04-17 06:56:44.611827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.612001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.612028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-04-17 06:56:44.612255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.612435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.612460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-04-17 06:56:44.612658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.612808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.612833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-04-17 06:56:44.612993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.613075] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:30:40.179 [2024-04-17 06:56:44.613127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-04-17 06:56:44.613185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.179 [2024-04-17 06:56:44.613185] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.180 [2024-04-17 06:56:44.613347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.613510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.613533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 
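Note on the repeated failure above: on Linux, errno 111 is ECONNREFUSED, which connect() reports when nothing is listening on the destination address/port; here the NVMe/TCP initiator keeps retrying 10.0.0.2:4420 while the nvmf target process (the "Starting SPDK ... / DPDK 23.11.0 initialization" line interleaved into the same console) is still coming up. The snippet below is a minimal standalone sketch, not SPDK code; the loopback address and the assumption that no listener is bound on port 4420 are illustrative only.

/* sketch.c -- reproduce errno 111 (ECONNREFUSED) with a plain connect().
 * Assumption: no process is listening on 127.0.0.1:4420 when this runs. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                       /* NVMe/TCP port, as in the log */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);   /* assumed: nothing listening here */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener bound, Linux returns ECONNREFUSED (111) immediately. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Run against an unbound port, this prints "connect() failed, errno = 111 (Connection refused)", the same condition the log entries above record until the target's listener is ready.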
00:30:40.180 [2024-04-17 06:56:44.613663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.613794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.613819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-04-17 06:56:44.613970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.614112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.614140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-04-17 06:56:44.614296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.614457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.614492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-04-17 06:56:44.614644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.614804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.614834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-04-17 06:56:44.615044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.615182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.615207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-04-17 06:56:44.615373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.615547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.615575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-04-17 06:56:44.615749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.615925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.615953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 
00:30:40.180 [2024-04-17 06:56:44.616148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.616313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.616339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-04-17 06:56:44.616505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.616720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.616767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-04-17 06:56:44.616969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.617128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.617158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-04-17 06:56:44.617334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.617497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.617522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-04-17 06:56:44.617692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.617898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.617952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-04-17 06:56:44.618116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.618303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.618329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-04-17 06:56:44.618461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.618693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.618741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 
00:30:40.180 [2024-04-17 06:56:44.618939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.619105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.619132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-04-17 06:56:44.619326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.619463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.619488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-04-17 06:56:44.619657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.619858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.619890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-04-17 06:56:44.620066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.620266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.620292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-04-17 06:56:44.620422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.620590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.620640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-04-17 06:56:44.620842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.621014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.621041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-04-17 06:56:44.621187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.621366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.621394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 
00:30:40.180 [2024-04-17 06:56:44.621581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.621772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.621798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-04-17 06:56:44.621941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.622065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.622091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-04-17 06:56:44.622285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.622443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.622468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-04-17 06:56:44.622679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.622818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.622845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-04-17 06:56:44.622992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.623182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.623210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-04-17 06:56:44.623377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.623583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-04-17 06:56:44.623611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-04-17 06:56:44.623782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.623977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.624004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 
00:30:40.181 [2024-04-17 06:56:44.624181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.624371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.624396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-04-17 06:56:44.624583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.624743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.624770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-04-17 06:56:44.624947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.625099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.625124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-04-17 06:56:44.625270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.625403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.625427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-04-17 06:56:44.625619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.625778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.625803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-04-17 06:56:44.625981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.626812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.626846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-04-17 06:56:44.627050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.628016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.628050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 
00:30:40.181 [2024-04-17 06:56:44.628278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.628999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.629031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-04-17 06:56:44.629233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.629929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.629961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-04-17 06:56:44.630171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.630361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.630386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-04-17 06:56:44.630553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.630677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.630719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-04-17 06:56:44.630891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.631072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.631097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-04-17 06:56:44.631280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.631421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.631448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-04-17 06:56:44.631630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.631805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.631829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 
00:30:40.181 [2024-04-17 06:56:44.631998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.632169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.632202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-04-17 06:56:44.632376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.632535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.632559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-04-17 06:56:44.632735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.632892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.632919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-04-17 06:56:44.633096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.633229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.633254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-04-17 06:56:44.633377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.633561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.633600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-04-17 06:56:44.633746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.633918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.633942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-04-17 06:56:44.634071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.634208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.634233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 
00:30:40.181 [2024-04-17 06:56:44.634365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.634513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.634538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-04-17 06:56:44.634698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.634855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.634895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-04-17 06:56:44.635074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.635254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.635282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-04-17 06:56:44.635471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.635682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.635730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-04-17 06:56:44.635937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.636674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.636706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-04-17 06:56:44.636908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.637626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.637658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-04-17 06:56:44.637892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.638047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-04-17 06:56:44.638088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 
00:30:40.181 [2024-04-17 06:56:44.638270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.638453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.638486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-04-17 06:56:44.638652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.638774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.638799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-04-17 06:56:44.638928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.639085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.639110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-04-17 06:56:44.639242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.639402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.639427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-04-17 06:56:44.639623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.639754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.639781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-04-17 06:56:44.639946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.640113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.640138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-04-17 06:56:44.640301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.640433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.640457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 
00:30:40.182 [2024-04-17 06:56:44.640622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.640869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.640916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-04-17 06:56:44.641116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.641302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.641327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-04-17 06:56:44.641476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.641687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.641711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-04-17 06:56:44.641877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.642098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.642122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-04-17 06:56:44.642285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.642420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.642445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-04-17 06:56:44.642632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.642874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.642921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-04-17 06:56:44.643099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.643232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.643261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 
00:30:40.182 [2024-04-17 06:56:44.643402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.643561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.643604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-04-17 06:56:44.643768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.643954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.643979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-04-17 06:56:44.644151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.644344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.644368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-04-17 06:56:44.644519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.644671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.644698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-04-17 06:56:44.644895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.645053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.645077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-04-17 06:56:44.645245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.645373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.645398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-04-17 06:56:44.645565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.645687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.645712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 
00:30:40.182 [2024-04-17 06:56:44.645858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.646010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.646035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-04-17 06:56:44.646220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.646368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.646393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-04-17 06:56:44.646539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.646669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.646716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-04-17 06:56:44.646898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-04-17 06:56:44.647041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.647071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-04-17 06:56:44.647236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.647390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.647414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-04-17 06:56:44.647606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.647767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.647808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-04-17 06:56:44.647976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.648180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.648208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 
00:30:40.183 [2024-04-17 06:56:44.648357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.648489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.648513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-04-17 06:56:44.648636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.648768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.648792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-04-17 06:56:44.648939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.649093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.649119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-04-17 06:56:44.649311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.649437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.649476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-04-17 06:56:44.649650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.649794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.649822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-04-17 06:56:44.649997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.650179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.650206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-04-17 06:56:44.650340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.650507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.650534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 
00:30:40.183 [2024-04-17 06:56:44.650731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.650900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.650929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-04-17 06:56:44.651130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.651300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.651327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.183 EAL: No free 2048 kB hugepages reported on node 1 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-04-17 06:56:44.651476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.651652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.651696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-04-17 06:56:44.651881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.652053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.652081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-04-17 06:56:44.652237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.652359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.652383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-04-17 06:56:44.652535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.652689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.652713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-04-17 06:56:44.652864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.653024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.653054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 
00:30:40.183 [2024-04-17 06:56:44.653205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.653353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.653378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-04-17 06:56:44.653508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.653672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.653713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-04-17 06:56:44.653877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.654029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.654058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-04-17 06:56:44.654237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.654399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.654425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-04-17 06:56:44.654590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.654770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.654794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-04-17 06:56:44.654939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.655069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.655094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-04-17 06:56:44.655260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.655387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.655412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 
00:30:40.183 [2024-04-17 06:56:44.655576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.655774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.655798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-04-17 06:56:44.655956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.656105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.656130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-04-17 06:56:44.656276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.656400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.656424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-04-17 06:56:44.656590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.656712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-04-17 06:56:44.656737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.184 [2024-04-17 06:56:44.656886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.657038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.657062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.184 qpair failed and we were unable to recover it. 00:30:40.184 [2024-04-17 06:56:44.657210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.657334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.657359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.184 qpair failed and we were unable to recover it. 00:30:40.184 [2024-04-17 06:56:44.657491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.657678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.657702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.184 qpair failed and we were unable to recover it. 
00:30:40.184 [2024-04-17 06:56:44.657840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.657969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.658004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.184 qpair failed and we were unable to recover it. 00:30:40.184 [2024-04-17 06:56:44.658167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.658303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.658327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.184 qpair failed and we were unable to recover it. 00:30:40.184 [2024-04-17 06:56:44.658449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.658577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.658602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.184 qpair failed and we were unable to recover it. 00:30:40.184 [2024-04-17 06:56:44.658761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.658917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.658942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.184 qpair failed and we were unable to recover it. 00:30:40.184 [2024-04-17 06:56:44.659096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.659251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.659278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.184 qpair failed and we were unable to recover it. 00:30:40.184 [2024-04-17 06:56:44.659403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.659521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.659544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.184 qpair failed and we were unable to recover it. 00:30:40.184 [2024-04-17 06:56:44.659733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.659855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.659879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.184 qpair failed and we were unable to recover it. 
00:30:40.184 [2024-04-17 06:56:44.660041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.660160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.660190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.184 qpair failed and we were unable to recover it. 00:30:40.184 [2024-04-17 06:56:44.660332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.660458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.660494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.184 qpair failed and we were unable to recover it. 00:30:40.184 [2024-04-17 06:56:44.660648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.660769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.660793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.184 qpair failed and we were unable to recover it. 00:30:40.184 [2024-04-17 06:56:44.660929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.661077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.661101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.184 qpair failed and we were unable to recover it. 00:30:40.184 [2024-04-17 06:56:44.661266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.661399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.661423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.184 qpair failed and we were unable to recover it. 00:30:40.184 [2024-04-17 06:56:44.661606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.661772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.661796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.184 qpair failed and we were unable to recover it. 00:30:40.184 [2024-04-17 06:56:44.661929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.662083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.662106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.184 qpair failed and we were unable to recover it. 
00:30:40.184 [2024-04-17 06:56:44.662246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.662400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.662423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.184 qpair failed and we were unable to recover it. 00:30:40.184 [2024-04-17 06:56:44.662586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.662741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.662766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.184 qpair failed and we were unable to recover it. 00:30:40.184 [2024-04-17 06:56:44.662922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.663049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.663084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.184 qpair failed and we were unable to recover it. 00:30:40.184 [2024-04-17 06:56:44.663242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.663380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.663406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.184 qpair failed and we were unable to recover it. 00:30:40.184 [2024-04-17 06:56:44.663577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.663759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.663784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.184 qpair failed and we were unable to recover it. 00:30:40.184 [2024-04-17 06:56:44.663942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.664099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.664123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.184 qpair failed and we were unable to recover it. 00:30:40.184 [2024-04-17 06:56:44.664262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.664410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.664435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.184 qpair failed and we were unable to recover it. 
00:30:40.184 [2024-04-17 06:56:44.664608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.664800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.664824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.184 qpair failed and we were unable to recover it. 00:30:40.184 [2024-04-17 06:56:44.664975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.665105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.665131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.184 qpair failed and we were unable to recover it. 00:30:40.184 [2024-04-17 06:56:44.665278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.665450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.665475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.184 qpair failed and we were unable to recover it. 00:30:40.184 [2024-04-17 06:56:44.665637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.665772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.665796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.184 qpair failed and we were unable to recover it. 00:30:40.184 [2024-04-17 06:56:44.665926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.184 [2024-04-17 06:56:44.666049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.666073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.185 qpair failed and we were unable to recover it. 00:30:40.185 [2024-04-17 06:56:44.666255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.666388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.666413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.185 qpair failed and we were unable to recover it. 00:30:40.185 [2024-04-17 06:56:44.666581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.666701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.666725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.185 qpair failed and we were unable to recover it. 
00:30:40.185 [2024-04-17 06:56:44.666987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.667108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.667132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.185 qpair failed and we were unable to recover it. 00:30:40.185 [2024-04-17 06:56:44.667295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.667426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.667451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.185 qpair failed and we were unable to recover it. 00:30:40.185 [2024-04-17 06:56:44.667615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.667768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.667795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.185 qpair failed and we were unable to recover it. 00:30:40.185 [2024-04-17 06:56:44.667954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.668109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.668134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.185 qpair failed and we were unable to recover it. 00:30:40.185 [2024-04-17 06:56:44.668283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.668442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.668467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.185 qpair failed and we were unable to recover it. 00:30:40.185 [2024-04-17 06:56:44.668598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.668775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.668799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.185 qpair failed and we were unable to recover it. 00:30:40.185 [2024-04-17 06:56:44.668954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.669070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.669094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.185 qpair failed and we were unable to recover it. 
00:30:40.185 [2024-04-17 06:56:44.669229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.669358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.669384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.185 qpair failed and we were unable to recover it. 00:30:40.185 [2024-04-17 06:56:44.669534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.669656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.669682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.185 qpair failed and we were unable to recover it. 00:30:40.185 [2024-04-17 06:56:44.669808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.669960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.669985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.185 qpair failed and we were unable to recover it. 00:30:40.185 [2024-04-17 06:56:44.670109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.670258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.670283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.185 qpair failed and we were unable to recover it. 00:30:40.185 [2024-04-17 06:56:44.670408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.670564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.670589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.185 qpair failed and we were unable to recover it. 00:30:40.185 [2024-04-17 06:56:44.670750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.670880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.670904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.185 qpair failed and we were unable to recover it. 00:30:40.185 [2024-04-17 06:56:44.671035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.671205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.671231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.185 qpair failed and we were unable to recover it. 
00:30:40.185 [2024-04-17 06:56:44.671362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.671487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.671511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.185 qpair failed and we were unable to recover it. 00:30:40.185 [2024-04-17 06:56:44.671672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.671853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.671877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.185 qpair failed and we were unable to recover it. 00:30:40.185 [2024-04-17 06:56:44.672036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.672227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.672252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.185 qpair failed and we were unable to recover it. 00:30:40.185 [2024-04-17 06:56:44.672413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.672608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.672633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.185 qpair failed and we were unable to recover it. 00:30:40.185 [2024-04-17 06:56:44.672796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.672972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.672996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.185 qpair failed and we were unable to recover it. 00:30:40.185 [2024-04-17 06:56:44.673154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.673293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.673318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.185 qpair failed and we were unable to recover it. 00:30:40.185 [2024-04-17 06:56:44.673479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.673668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.673693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.185 qpair failed and we were unable to recover it. 
00:30:40.185 [2024-04-17 06:56:44.673815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.673971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.673995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.185 qpair failed and we were unable to recover it. 00:30:40.185 [2024-04-17 06:56:44.674128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.674300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.674326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.185 qpair failed and we were unable to recover it. 00:30:40.185 [2024-04-17 06:56:44.674481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.674636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.674660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.185 qpair failed and we were unable to recover it. 00:30:40.185 [2024-04-17 06:56:44.674787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.674968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.674992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.185 qpair failed and we were unable to recover it. 00:30:40.185 [2024-04-17 06:56:44.675111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.675252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.675278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.185 qpair failed and we were unable to recover it. 00:30:40.185 [2024-04-17 06:56:44.675401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.185 [2024-04-17 06:56:44.675553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.675577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.186 qpair failed and we were unable to recover it. 00:30:40.186 [2024-04-17 06:56:44.675734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.675870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.675896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.186 qpair failed and we were unable to recover it. 
00:30:40.186 [2024-04-17 06:56:44.676026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.676185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.676210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.186 qpair failed and we were unable to recover it. 00:30:40.186 [2024-04-17 06:56:44.676347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.676471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.676496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.186 qpair failed and we were unable to recover it. 00:30:40.186 [2024-04-17 06:56:44.676632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.676797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.676821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.186 qpair failed and we were unable to recover it. 00:30:40.186 [2024-04-17 06:56:44.676954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.677081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.677108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.186 qpair failed and we were unable to recover it. 00:30:40.186 [2024-04-17 06:56:44.677275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.677404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.677429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.186 qpair failed and we were unable to recover it. 00:30:40.186 [2024-04-17 06:56:44.677601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.677788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.677812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.186 qpair failed and we were unable to recover it. 00:30:40.186 [2024-04-17 06:56:44.677983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.678106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.678130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.186 qpair failed and we were unable to recover it. 
00:30:40.186 [2024-04-17 06:56:44.678290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.678419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.678444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.186 qpair failed and we were unable to recover it. 00:30:40.186 [2024-04-17 06:56:44.678610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.678770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.678796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.186 qpair failed and we were unable to recover it. 00:30:40.186 [2024-04-17 06:56:44.678959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.679109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.679134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.186 qpair failed and we were unable to recover it. 00:30:40.186 [2024-04-17 06:56:44.679267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.679402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.679426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.186 qpair failed and we were unable to recover it. 00:30:40.186 [2024-04-17 06:56:44.679606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.679753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.679777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.186 qpair failed and we were unable to recover it. 00:30:40.186 [2024-04-17 06:56:44.679937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.680089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.680114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.186 qpair failed and we were unable to recover it. 00:30:40.186 [2024-04-17 06:56:44.680250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.680394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.680419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.186 qpair failed and we were unable to recover it. 
00:30:40.186 [2024-04-17 06:56:44.680596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.680718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.680742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.186 qpair failed and we were unable to recover it. 00:30:40.186 [2024-04-17 06:56:44.680903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.681025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.681050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.186 qpair failed and we were unable to recover it. 00:30:40.186 [2024-04-17 06:56:44.681237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.681364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.681388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.186 qpair failed and we were unable to recover it. 00:30:40.186 [2024-04-17 06:56:44.681550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.681681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.681705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.186 qpair failed and we were unable to recover it. 00:30:40.186 [2024-04-17 06:56:44.681859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.681995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.682020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.186 qpair failed and we were unable to recover it. 00:30:40.186 [2024-04-17 06:56:44.682157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.682314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.682338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.186 qpair failed and we were unable to recover it. 00:30:40.186 [2024-04-17 06:56:44.682533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.682657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.682681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.186 qpair failed and we were unable to recover it. 
00:30:40.186 [2024-04-17 06:56:44.682841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.683018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.683042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.186 qpair failed and we were unable to recover it. 00:30:40.186 [2024-04-17 06:56:44.683165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.683325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.683358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.186 qpair failed and we were unable to recover it. 00:30:40.186 [2024-04-17 06:56:44.683516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.683683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.683708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.186 qpair failed and we were unable to recover it. 00:30:40.186 [2024-04-17 06:56:44.683895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.684043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.684067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.186 qpair failed and we were unable to recover it. 00:30:40.186 [2024-04-17 06:56:44.684234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.684391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.186 [2024-04-17 06:56:44.684417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.186 qpair failed and we were unable to recover it. 00:30:40.187 [2024-04-17 06:56:44.684581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.684739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.684766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.187 qpair failed and we were unable to recover it. 00:30:40.187 [2024-04-17 06:56:44.684922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.685052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.685077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.187 qpair failed and we were unable to recover it. 
00:30:40.187 [2024-04-17 06:56:44.685239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.685400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.685427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.187 qpair failed and we were unable to recover it. 00:30:40.187 [2024-04-17 06:56:44.685561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.685709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.685733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.187 qpair failed and we were unable to recover it. 00:30:40.187 [2024-04-17 06:56:44.685885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.686029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.686060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.187 qpair failed and we were unable to recover it. 00:30:40.187 [2024-04-17 06:56:44.686206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.686362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.686386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.187 qpair failed and we were unable to recover it. 00:30:40.187 [2024-04-17 06:56:44.686569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.686753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.686792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.187 qpair failed and we were unable to recover it. 00:30:40.187 [2024-04-17 06:56:44.686916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.687067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.687091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.187 qpair failed and we were unable to recover it. 00:30:40.187 [2024-04-17 06:56:44.687215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.687346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.687371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.187 qpair failed and we were unable to recover it. 
00:30:40.187 [2024-04-17 06:56:44.687494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.687652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.687676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.187 qpair failed and we were unable to recover it. 00:30:40.187 [2024-04-17 06:56:44.687836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.687865] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:40.187 [2024-04-17 06:56:44.688025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.688050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.187 qpair failed and we were unable to recover it. 00:30:40.187 [2024-04-17 06:56:44.688198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.688321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.688346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.187 qpair failed and we were unable to recover it. 00:30:40.187 [2024-04-17 06:56:44.688483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.688644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.688670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.187 qpair failed and we were unable to recover it. 00:30:40.187 [2024-04-17 06:56:44.688825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.688952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.688976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.187 qpair failed and we were unable to recover it. 00:30:40.187 [2024-04-17 06:56:44.689183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.689342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.689367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.187 qpair failed and we were unable to recover it. 00:30:40.187 [2024-04-17 06:56:44.689518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.689691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.689717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.187 qpair failed and we were unable to recover it. 
00:30:40.187 [2024-04-17 06:56:44.689871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.690001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.690027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.187 qpair failed and we were unable to recover it. 00:30:40.187 [2024-04-17 06:56:44.690158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.690325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.690350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.187 qpair failed and we were unable to recover it. 00:30:40.187 [2024-04-17 06:56:44.690492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.690751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.690775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.187 qpair failed and we were unable to recover it. 00:30:40.187 [2024-04-17 06:56:44.690964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.691117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.691141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.187 qpair failed and we were unable to recover it. 00:30:40.187 [2024-04-17 06:56:44.691288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.691411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.691436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.187 qpair failed and we were unable to recover it. 00:30:40.187 [2024-04-17 06:56:44.691666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.691845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.691870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.187 qpair failed and we were unable to recover it. 00:30:40.187 [2024-04-17 06:56:44.692033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.692221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.692247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.187 qpair failed and we were unable to recover it. 
00:30:40.187 [2024-04-17 06:56:44.692376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.692546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.692572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.187 qpair failed and we were unable to recover it. 00:30:40.187 [2024-04-17 06:56:44.692793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.187 [2024-04-17 06:56:44.692992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.693017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.188 qpair failed and we were unable to recover it. 00:30:40.188 [2024-04-17 06:56:44.693170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.693336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.693363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.188 qpair failed and we were unable to recover it. 00:30:40.188 [2024-04-17 06:56:44.693520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.693656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.693681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.188 qpair failed and we were unable to recover it. 00:30:40.188 [2024-04-17 06:56:44.693831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.693960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.693987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.188 qpair failed and we were unable to recover it. 00:30:40.188 [2024-04-17 06:56:44.694143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.694282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.694308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.188 qpair failed and we were unable to recover it. 00:30:40.188 [2024-04-17 06:56:44.694440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.694602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.694626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.188 qpair failed and we were unable to recover it. 
00:30:40.188 [2024-04-17 06:56:44.694763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.694924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.694950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.188 qpair failed and we were unable to recover it. 00:30:40.188 [2024-04-17 06:56:44.695099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.695267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.695293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.188 qpair failed and we were unable to recover it. 00:30:40.188 [2024-04-17 06:56:44.695418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.695566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.695590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.188 qpair failed and we were unable to recover it. 00:30:40.188 [2024-04-17 06:56:44.695711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.695866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.695904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.188 qpair failed and we were unable to recover it. 00:30:40.188 [2024-04-17 06:56:44.696076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.696243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.696269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.188 qpair failed and we were unable to recover it. 00:30:40.188 [2024-04-17 06:56:44.696421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.696581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.696606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.188 qpair failed and we were unable to recover it. 00:30:40.188 [2024-04-17 06:56:44.696798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.696930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.696959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.188 qpair failed and we were unable to recover it. 
00:30:40.188 [2024-04-17 06:56:44.697115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.697271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.697296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.188 qpair failed and we were unable to recover it. 00:30:40.188 [2024-04-17 06:56:44.697436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.697639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.697664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.188 qpair failed and we were unable to recover it. 00:30:40.188 [2024-04-17 06:56:44.697843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.697964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.697989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.188 qpair failed and we were unable to recover it. 00:30:40.188 [2024-04-17 06:56:44.698146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.698341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.698368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.188 qpair failed and we were unable to recover it. 00:30:40.188 [2024-04-17 06:56:44.698529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.698659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.698683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.188 qpair failed and we were unable to recover it. 00:30:40.188 [2024-04-17 06:56:44.698847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.698975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.698999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.188 qpair failed and we were unable to recover it. 00:30:40.188 [2024-04-17 06:56:44.699188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.699316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.699343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.188 qpair failed and we were unable to recover it. 
00:30:40.188 [2024-04-17 06:56:44.699503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.699685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.699710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.188 qpair failed and we were unable to recover it. 00:30:40.188 [2024-04-17 06:56:44.699835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.699973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.699998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.188 qpair failed and we were unable to recover it. 00:30:40.188 [2024-04-17 06:56:44.700192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.700355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.700386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.188 qpair failed and we were unable to recover it. 00:30:40.188 [2024-04-17 06:56:44.700514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.700682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.700707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.188 qpair failed and we were unable to recover it. 00:30:40.188 [2024-04-17 06:56:44.700829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.701005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.701030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.188 qpair failed and we were unable to recover it. 00:30:40.188 [2024-04-17 06:56:44.701159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.701297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.701322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.188 qpair failed and we were unable to recover it. 00:30:40.188 [2024-04-17 06:56:44.701450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.701617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.701642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.188 qpair failed and we were unable to recover it. 
00:30:40.188 [2024-04-17 06:56:44.701769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.701959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.701983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.188 qpair failed and we were unable to recover it. 00:30:40.188 [2024-04-17 06:56:44.702110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.702250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.188 [2024-04-17 06:56:44.702276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.188 qpair failed and we were unable to recover it. 00:30:40.188 [2024-04-17 06:56:44.702436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.702586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.702611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-04-17 06:56:44.702745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.702870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.702895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-04-17 06:56:44.703050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.703190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.703215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-04-17 06:56:44.703399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.703522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.703551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-04-17 06:56:44.703735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.703898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.703923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 
00:30:40.189 [2024-04-17 06:56:44.704082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.704247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.704273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-04-17 06:56:44.704434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.704631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.704656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-04-17 06:56:44.704788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.704913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.704938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-04-17 06:56:44.705119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.705286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.705311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-04-17 06:56:44.705466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.705618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.705643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-04-17 06:56:44.705822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.705971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.705996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-04-17 06:56:44.706156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.706325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.706350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 
00:30:40.189 [2024-04-17 06:56:44.706477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.706637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.706663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-04-17 06:56:44.706825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.706948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.706976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-04-17 06:56:44.707113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.707252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.707277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-04-17 06:56:44.707435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.707591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.707616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-04-17 06:56:44.707741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.707866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.707890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-04-17 06:56:44.708072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.708213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.708239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-04-17 06:56:44.708417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.708572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.708597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 
00:30:40.189 [2024-04-17 06:56:44.708785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.708969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.708994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-04-17 06:56:44.709114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.709253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.709279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-04-17 06:56:44.709439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.709578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.709603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-04-17 06:56:44.709738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.709909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.709934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-04-17 06:56:44.710123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.710263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.710290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-04-17 06:56:44.710478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.710636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.710661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-04-17 06:56:44.710827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.710980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-04-17 06:56:44.711005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 
00:30:40.189 [2024-04-17 06:56:44.711164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.189 [2024-04-17 06:56:44.711325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.189 [2024-04-17 06:56:44.711351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420
00:30:40.189 qpair failed and we were unable to recover it.
00:30:40.189-00:30:40.195 [... this four-line sequence repeats for every reconnect attempt from 06:56:44.711 through 06:56:44.762: two "connect() failed, errno = 111" errors from posix_sock_create, the nvme_tcp_qpair_connect_sock sock connection error against addr=10.0.0.2, port=4420 (first for tqpair=0x7f73ec000b90, later for tqpair=0x10df8b0), and the closing "qpair failed and we were unable to recover it." ...]
00:30:40.195 [2024-04-17 06:56:44.762793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.762943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.762968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.195 qpair failed and we were unable to recover it. 00:30:40.195 [2024-04-17 06:56:44.763092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.763264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.763290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.195 qpair failed and we were unable to recover it. 00:30:40.195 [2024-04-17 06:56:44.763421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.763577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.763602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.195 qpair failed and we were unable to recover it. 00:30:40.195 [2024-04-17 06:56:44.763725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.763872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.763896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.195 qpair failed and we were unable to recover it. 00:30:40.195 [2024-04-17 06:56:44.764044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.764191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.764217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.195 qpair failed and we were unable to recover it. 00:30:40.195 [2024-04-17 06:56:44.764351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.764510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.764535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.195 qpair failed and we were unable to recover it. 00:30:40.195 [2024-04-17 06:56:44.764699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.764847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.764872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.195 qpair failed and we were unable to recover it. 
00:30:40.195 [2024-04-17 06:56:44.765005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.765130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.765154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.195 qpair failed and we were unable to recover it. 00:30:40.195 [2024-04-17 06:56:44.765301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.765462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.765486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.195 qpair failed and we were unable to recover it. 00:30:40.195 [2024-04-17 06:56:44.765613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.765740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.765764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.195 qpair failed and we were unable to recover it. 00:30:40.195 [2024-04-17 06:56:44.765934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.766119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.766145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.195 qpair failed and we were unable to recover it. 00:30:40.195 [2024-04-17 06:56:44.766299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.766469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.766495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.195 qpair failed and we were unable to recover it. 00:30:40.195 [2024-04-17 06:56:44.766615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.766770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.766794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.195 qpair failed and we were unable to recover it. 00:30:40.195 [2024-04-17 06:56:44.766932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.767100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.767126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.195 qpair failed and we were unable to recover it. 
00:30:40.195 [2024-04-17 06:56:44.767290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.767480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.767504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.195 qpair failed and we were unable to recover it. 00:30:40.195 [2024-04-17 06:56:44.767641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.767798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.767826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.195 qpair failed and we were unable to recover it. 00:30:40.195 [2024-04-17 06:56:44.767994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.768136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.195 [2024-04-17 06:56:44.768165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.195 qpair failed and we were unable to recover it. 00:30:40.195 [2024-04-17 06:56:44.768303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.467 [2024-04-17 06:56:44.768433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.467 [2024-04-17 06:56:44.768458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.467 qpair failed and we were unable to recover it. 00:30:40.467 [2024-04-17 06:56:44.768622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.467 [2024-04-17 06:56:44.768779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.467 [2024-04-17 06:56:44.768804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.467 qpair failed and we were unable to recover it. 00:30:40.467 [2024-04-17 06:56:44.768950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.467 [2024-04-17 06:56:44.769083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.467 [2024-04-17 06:56:44.769107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.467 qpair failed and we were unable to recover it. 00:30:40.467 [2024-04-17 06:56:44.769285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.467 [2024-04-17 06:56:44.769420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.467 [2024-04-17 06:56:44.769446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.467 qpair failed and we were unable to recover it. 
00:30:40.467 [2024-04-17 06:56:44.769643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.467 [2024-04-17 06:56:44.769798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.467 [2024-04-17 06:56:44.769824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.467 qpair failed and we were unable to recover it. 00:30:40.467 [2024-04-17 06:56:44.769963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.467 [2024-04-17 06:56:44.770123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.467 [2024-04-17 06:56:44.770147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.467 qpair failed and we were unable to recover it. 00:30:40.467 [2024-04-17 06:56:44.770288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.467 [2024-04-17 06:56:44.770443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.467 [2024-04-17 06:56:44.770467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.467 qpair failed and we were unable to recover it. 00:30:40.467 [2024-04-17 06:56:44.770586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.467 [2024-04-17 06:56:44.770737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.467 [2024-04-17 06:56:44.770762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.467 qpair failed and we were unable to recover it. 00:30:40.467 [2024-04-17 06:56:44.770895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.467 [2024-04-17 06:56:44.771058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.467 [2024-04-17 06:56:44.771084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.467 qpair failed and we were unable to recover it. 00:30:40.467 [2024-04-17 06:56:44.771252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.467 [2024-04-17 06:56:44.771391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.467 [2024-04-17 06:56:44.771416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.467 qpair failed and we were unable to recover it. 00:30:40.467 [2024-04-17 06:56:44.771577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.467 [2024-04-17 06:56:44.771710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.467 [2024-04-17 06:56:44.771738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.467 qpair failed and we were unable to recover it. 
00:30:40.467 [2024-04-17 06:56:44.771915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.772064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.772089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.468 qpair failed and we were unable to recover it. 00:30:40.468 [2024-04-17 06:56:44.772220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.772345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.772370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.468 qpair failed and we were unable to recover it. 00:30:40.468 [2024-04-17 06:56:44.772520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.772649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.772674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.468 qpair failed and we were unable to recover it. 00:30:40.468 [2024-04-17 06:56:44.772857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.773037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.773062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.468 qpair failed and we were unable to recover it. 00:30:40.468 [2024-04-17 06:56:44.773210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.773358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.773383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.468 qpair failed and we were unable to recover it. 00:30:40.468 [2024-04-17 06:56:44.773517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.773673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.773698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.468 qpair failed and we were unable to recover it. 00:30:40.468 [2024-04-17 06:56:44.773850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.774035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.774059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.468 qpair failed and we were unable to recover it. 
00:30:40.468 [2024-04-17 06:56:44.774209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.774346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.774372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.468 qpair failed and we were unable to recover it. 00:30:40.468 [2024-04-17 06:56:44.774551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.774676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.774706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.468 qpair failed and we were unable to recover it. 00:30:40.468 [2024-04-17 06:56:44.774888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.775071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.775096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.468 qpair failed and we were unable to recover it. 00:30:40.468 [2024-04-17 06:56:44.775225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.775345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.775370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.468 qpair failed and we were unable to recover it. 00:30:40.468 [2024-04-17 06:56:44.775498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.775630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.775655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.468 qpair failed and we were unable to recover it. 00:30:40.468 [2024-04-17 06:56:44.775808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.775937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.775964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.468 qpair failed and we were unable to recover it. 00:30:40.468 [2024-04-17 06:56:44.776144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.776313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.776338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.468 qpair failed and we were unable to recover it. 
00:30:40.468 [2024-04-17 06:56:44.776523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.776679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.776704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.468 qpair failed and we were unable to recover it. 00:30:40.468 [2024-04-17 06:56:44.776891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.777014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.777041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.468 qpair failed and we were unable to recover it. 00:30:40.468 [2024-04-17 06:56:44.777223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.777344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.777369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.468 qpair failed and we were unable to recover it. 00:30:40.468 [2024-04-17 06:56:44.777487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.777623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.777648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.468 qpair failed and we were unable to recover it. 00:30:40.468 [2024-04-17 06:56:44.777799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.777970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.777994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.468 qpair failed and we were unable to recover it. 00:30:40.468 [2024-04-17 06:56:44.778195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.778320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.778344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.468 qpair failed and we were unable to recover it. 00:30:40.468 [2024-04-17 06:56:44.778526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.778680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.778704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.468 qpair failed and we were unable to recover it. 
00:30:40.468 [2024-04-17 06:56:44.778874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.779028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.779053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.468 qpair failed and we were unable to recover it. 00:30:40.468 [2024-04-17 06:56:44.779185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.779318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.779345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.468 qpair failed and we were unable to recover it. 00:30:40.468 [2024-04-17 06:56:44.779479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.779639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.779664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.468 qpair failed and we were unable to recover it. 00:30:40.468 [2024-04-17 06:56:44.779790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.779915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.468 [2024-04-17 06:56:44.779940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.468 qpair failed and we were unable to recover it. 00:30:40.469 [2024-04-17 06:56:44.780066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.780256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.780282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-04-17 06:56:44.780409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.780593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.780617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-04-17 06:56:44.780746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.780767] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:40.469 [2024-04-17 06:56:44.780802] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:40.469 [2024-04-17 06:56:44.780817] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:40.469 [2024-04-17 06:56:44.780830] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:40.469 [2024-04-17 06:56:44.780840] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
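The repeated "connect() failed, errno = 111" entries above correspond to POSIX ECONNREFUSED: the TCP connect to 10.0.0.2:4420 is being refused, which typically means nothing is accepting connections on that address and port at that moment. The following standalone sketch is illustrative only and is not taken from the SPDK tree; it assumes a Linux host and uses a loopback port with no listener to reproduce the same errno that the posix_sock_create errors report.

/*
 * Illustrative sketch (not SPDK code): on Linux, errno 111 is ECONNREFUSED,
 * which connect() returns when no listener accepts the TCP connection on the
 * target address/port. Connecting to a loopback port with no listener
 * reproduces the errno seen in the log above.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                      /* NVMe/TCP port, as in the log */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* loopback; assumes nothing listens on 4420 */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* Expected output: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

When the target application is still running, the snapshot command quoted in the app_setup_trace NOTICE above ('spdk_trace -s nvmf -i 0', or a copy of /dev/shm/nvmf_trace.0 for offline analysis) is the suggested way to inspect what it was doing while these connects were being refused.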
00:30:40.469 [2024-04-17 06:56:44.780870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.780901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-04-17 06:56:44.781051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.781071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:30:40.469 [2024-04-17 06:56:44.781203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.781119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:30:40.469 [2024-04-17 06:56:44.781229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-04-17 06:56:44.781196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:30:40.469 [2024-04-17 06:56:44.781212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:30:40.469 [2024-04-17 06:56:44.781379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.781559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.781584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-04-17 06:56:44.781736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.781898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.781923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-04-17 06:56:44.782054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.782236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.782261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-04-17 06:56:44.782404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.782569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.782593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-04-17 06:56:44.782775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.782944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.782968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 
00:30:40.469 [2024-04-17 06:56:44.783094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.783248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.783274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-04-17 06:56:44.783432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.783561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.783585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-04-17 06:56:44.783714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.783872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.783896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-04-17 06:56:44.784033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.784158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.784194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-04-17 06:56:44.784349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.784498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.784523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-04-17 06:56:44.784715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.784847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.784872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-04-17 06:56:44.785032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.785195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.785220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 
00:30:40.469 [2024-04-17 06:56:44.785346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.785491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.785515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-04-17 06:56:44.785667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.785828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.785853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-04-17 06:56:44.786007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.786124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.786148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-04-17 06:56:44.786295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.786455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.786479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-04-17 06:56:44.786624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.786743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.786767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-04-17 06:56:44.786938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.787086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.787111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-04-17 06:56:44.787284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.787452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-04-17 06:56:44.787477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 
00:30:40.469 [2024-04-17 06:56:44.787614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.787749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.787773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-04-17 06:56:44.787908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.788040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.788064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-04-17 06:56:44.788218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.788374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.788399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-04-17 06:56:44.788535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.788662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.788687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-04-17 06:56:44.788851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.789003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.789028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-04-17 06:56:44.789199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.789335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.789361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-04-17 06:56:44.789495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.789664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.789688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 
00:30:40.470 [2024-04-17 06:56:44.789815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.789973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.789997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-04-17 06:56:44.790134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.790341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.790366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-04-17 06:56:44.790504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.790629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.790655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-04-17 06:56:44.790813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.790952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.790976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-04-17 06:56:44.791109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.791288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.791314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-04-17 06:56:44.791480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.791610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.791636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-04-17 06:56:44.791761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.791920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.791944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 
00:30:40.470 [2024-04-17 06:56:44.792093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.792247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.792273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-04-17 06:56:44.792401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.792544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.792571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-04-17 06:56:44.792707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.792835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.792859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-04-17 06:56:44.793060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.793191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.793217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-04-17 06:56:44.793420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.793569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.793593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-04-17 06:56:44.793753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.793890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.793914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-04-17 06:56:44.794054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.794184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.794210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 
00:30:40.470 [2024-04-17 06:56:44.794336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.794481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.794506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-04-17 06:56:44.794636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.794777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.794801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-04-17 06:56:44.794932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.795096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.795120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-04-17 06:56:44.795261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.795390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.795415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-04-17 06:56:44.795559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-04-17 06:56:44.795703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.795727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-04-17 06:56:44.795898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.796035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.796059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-04-17 06:56:44.796187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.796313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.796339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 
00:30:40.471 [2024-04-17 06:56:44.796476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.796595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.796620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-04-17 06:56:44.796748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.796887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.796915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-04-17 06:56:44.797076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.797195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.797221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-04-17 06:56:44.797351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.797527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.797552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-04-17 06:56:44.797680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.797807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.797832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-04-17 06:56:44.797953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.798105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.798129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-04-17 06:56:44.798269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.798404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.798429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 
00:30:40.471 [2024-04-17 06:56:44.798617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.798737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.798761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-04-17 06:56:44.798949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.799099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.799124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-04-17 06:56:44.799256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.799386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.799410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-04-17 06:56:44.799567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.799714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.799738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-04-17 06:56:44.799871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.799992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.800020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-04-17 06:56:44.800141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.800276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.800301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-04-17 06:56:44.800424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.800554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.800579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 
00:30:40.471 [2024-04-17 06:56:44.800713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.800895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.800920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-04-17 06:56:44.801049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.801171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.801202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-04-17 06:56:44.801327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.801453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.801478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-04-17 06:56:44.801623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.801743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.801769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-04-17 06:56:44.801928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.802064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.802088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-04-17 06:56:44.802216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.802375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.802399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-04-17 06:56:44.802555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.802684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.802708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 
00:30:40.471 [2024-04-17 06:56:44.802832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.802953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.802977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-04-17 06:56:44.803165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.803306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.803330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-04-17 06:56:44.803455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.803595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.803619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-04-17 06:56:44.803748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.803931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-04-17 06:56:44.803955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-04-17 06:56:44.804084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.804232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.804257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-04-17 06:56:44.804379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.804518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.804549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-04-17 06:56:44.804691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.804828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.804853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 
00:30:40.472 [2024-04-17 06:56:44.804994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.805116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.805141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-04-17 06:56:44.805292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.805427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.805456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-04-17 06:56:44.805602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.805743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.805767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-04-17 06:56:44.805898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.806016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.806041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-04-17 06:56:44.806207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.806337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.806362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-04-17 06:56:44.806492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.806661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.806687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-04-17 06:56:44.806815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.806949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.806974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 
00:30:40.472 [2024-04-17 06:56:44.807110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.807261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.807286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-04-17 06:56:44.807433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.807597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.807622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-04-17 06:56:44.807752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.807919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.807943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-04-17 06:56:44.808064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.808216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.808241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-04-17 06:56:44.808366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.808493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.808519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-04-17 06:56:44.808656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.808819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.808843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-04-17 06:56:44.808970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.809124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.809148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 
00:30:40.472 [2024-04-17 06:56:44.809311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.809446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-04-17 06:56:44.809470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.473 [2024-04-17 06:56:44.809606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.809746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.809770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.473 qpair failed and we were unable to recover it. 00:30:40.473 [2024-04-17 06:56:44.809928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.810056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.810080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.473 qpair failed and we were unable to recover it. 00:30:40.473 [2024-04-17 06:56:44.810247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.810372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.810396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.473 qpair failed and we were unable to recover it. 00:30:40.473 [2024-04-17 06:56:44.810557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.810722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.810747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.473 qpair failed and we were unable to recover it. 00:30:40.473 [2024-04-17 06:56:44.810888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.811059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.811084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.473 qpair failed and we were unable to recover it. 00:30:40.473 [2024-04-17 06:56:44.811219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.811345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.811370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.473 qpair failed and we were unable to recover it. 
00:30:40.473 [2024-04-17 06:56:44.811493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.811630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.811654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.473 qpair failed and we were unable to recover it. 00:30:40.473 [2024-04-17 06:56:44.811790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.811915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.811939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.473 qpair failed and we were unable to recover it. 00:30:40.473 [2024-04-17 06:56:44.812072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.812237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.812263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.473 qpair failed and we were unable to recover it. 00:30:40.473 [2024-04-17 06:56:44.812425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.812590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.812615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.473 qpair failed and we were unable to recover it. 00:30:40.473 [2024-04-17 06:56:44.812768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.812893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.812917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.473 qpair failed and we were unable to recover it. 00:30:40.473 [2024-04-17 06:56:44.813070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.813199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.813253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.473 qpair failed and we were unable to recover it. 00:30:40.473 [2024-04-17 06:56:44.813379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.813508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.813532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.473 qpair failed and we were unable to recover it. 
00:30:40.473 [2024-04-17 06:56:44.813696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.813865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.813890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.473 qpair failed and we were unable to recover it. 00:30:40.473 [2024-04-17 06:56:44.814054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.814196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.814221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.473 qpair failed and we were unable to recover it. 00:30:40.473 [2024-04-17 06:56:44.814364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.814530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.814554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.473 qpair failed and we were unable to recover it. 00:30:40.473 [2024-04-17 06:56:44.814690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.814821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.814845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.473 qpair failed and we were unable to recover it. 00:30:40.473 [2024-04-17 06:56:44.814968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.815111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.815135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.473 qpair failed and we were unable to recover it. 00:30:40.473 [2024-04-17 06:56:44.815282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.815438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.815463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.473 qpair failed and we were unable to recover it. 00:30:40.473 [2024-04-17 06:56:44.815597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.815758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.815786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.473 qpair failed and we were unable to recover it. 
00:30:40.473 [2024-04-17 06:56:44.815915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.816069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.816094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.473 qpair failed and we were unable to recover it. 00:30:40.473 [2024-04-17 06:56:44.816264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.816408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.816432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.473 qpair failed and we were unable to recover it. 00:30:40.473 [2024-04-17 06:56:44.816598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.816808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.816832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.473 qpair failed and we were unable to recover it. 00:30:40.473 [2024-04-17 06:56:44.816966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.817099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.817126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.473 qpair failed and we were unable to recover it. 00:30:40.473 [2024-04-17 06:56:44.817276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.817436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.817464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.473 qpair failed and we were unable to recover it. 00:30:40.473 [2024-04-17 06:56:44.817598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.817769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.817794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.473 qpair failed and we were unable to recover it. 00:30:40.473 [2024-04-17 06:56:44.817942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.818069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-04-17 06:56:44.818093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.473 qpair failed and we were unable to recover it. 
00:30:40.474 [2024-04-17 06:56:44.818299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.818433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.818463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.474 qpair failed and we were unable to recover it. 00:30:40.474 [2024-04-17 06:56:44.818612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.818746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.818772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.474 qpair failed and we were unable to recover it. 00:30:40.474 [2024-04-17 06:56:44.818908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.819065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.819089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.474 qpair failed and we were unable to recover it. 00:30:40.474 [2024-04-17 06:56:44.819260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.819420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.819444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.474 qpair failed and we were unable to recover it. 00:30:40.474 [2024-04-17 06:56:44.819658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.819790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.819816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.474 qpair failed and we were unable to recover it. 00:30:40.474 [2024-04-17 06:56:44.819952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.820082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.820106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.474 qpair failed and we were unable to recover it. 00:30:40.474 [2024-04-17 06:56:44.820282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.820430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.820454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.474 qpair failed and we were unable to recover it. 
00:30:40.474 [2024-04-17 06:56:44.820609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.820735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.820759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.474 qpair failed and we were unable to recover it. 00:30:40.474 [2024-04-17 06:56:44.820883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.821037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.821061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.474 qpair failed and we were unable to recover it. 00:30:40.474 [2024-04-17 06:56:44.821199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.821332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.821358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.474 qpair failed and we were unable to recover it. 00:30:40.474 [2024-04-17 06:56:44.821488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.821620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.821645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.474 qpair failed and we were unable to recover it. 00:30:40.474 [2024-04-17 06:56:44.821803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.821950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.821974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.474 qpair failed and we were unable to recover it. 00:30:40.474 [2024-04-17 06:56:44.822102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.822274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.822300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.474 qpair failed and we were unable to recover it. 00:30:40.474 [2024-04-17 06:56:44.822453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.822580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.822605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.474 qpair failed and we were unable to recover it. 
00:30:40.474 [2024-04-17 06:56:44.822763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.822886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.822911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.474 qpair failed and we were unable to recover it. 00:30:40.474 [2024-04-17 06:56:44.823076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.823214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.823240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.474 qpair failed and we were unable to recover it. 00:30:40.474 [2024-04-17 06:56:44.823380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.823541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.823565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.474 qpair failed and we were unable to recover it. 00:30:40.474 [2024-04-17 06:56:44.823691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.823850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.823875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.474 qpair failed and we were unable to recover it. 00:30:40.474 [2024-04-17 06:56:44.824014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.824168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.824200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.474 qpair failed and we were unable to recover it. 00:30:40.474 [2024-04-17 06:56:44.824324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.824471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.824496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.474 qpair failed and we were unable to recover it. 00:30:40.474 [2024-04-17 06:56:44.824621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.824780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.824804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.474 qpair failed and we were unable to recover it. 
00:30:40.474 [2024-04-17 06:56:44.824922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.825044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.825068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.474 qpair failed and we were unable to recover it. 00:30:40.474 [2024-04-17 06:56:44.825280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.825398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.825423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.474 qpair failed and we were unable to recover it. 00:30:40.474 [2024-04-17 06:56:44.825559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.825699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.825723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.474 qpair failed and we were unable to recover it. 00:30:40.474 [2024-04-17 06:56:44.825849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.825978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.826003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.474 qpair failed and we were unable to recover it. 00:30:40.474 [2024-04-17 06:56:44.826133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.826299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.826323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.474 qpair failed and we were unable to recover it. 00:30:40.474 [2024-04-17 06:56:44.826458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.826583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.826608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.474 qpair failed and we were unable to recover it. 00:30:40.474 [2024-04-17 06:56:44.826731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.474 [2024-04-17 06:56:44.826897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.826922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.475 qpair failed and we were unable to recover it. 
00:30:40.475 [2024-04-17 06:56:44.827125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.827258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.827283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.475 qpair failed and we were unable to recover it. 00:30:40.475 [2024-04-17 06:56:44.827418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.827606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.827630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.475 qpair failed and we were unable to recover it. 00:30:40.475 [2024-04-17 06:56:44.827752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.827954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.827978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.475 qpair failed and we were unable to recover it. 00:30:40.475 [2024-04-17 06:56:44.828134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.828303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.828329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.475 qpair failed and we were unable to recover it. 00:30:40.475 [2024-04-17 06:56:44.828477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.828623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.828647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.475 qpair failed and we were unable to recover it. 00:30:40.475 [2024-04-17 06:56:44.828798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.828925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.828949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.475 qpair failed and we were unable to recover it. 00:30:40.475 [2024-04-17 06:56:44.829070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.829232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.829258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.475 qpair failed and we were unable to recover it. 
00:30:40.475 [2024-04-17 06:56:44.829380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.829505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.829530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.475 qpair failed and we were unable to recover it. 00:30:40.475 [2024-04-17 06:56:44.829686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.829802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.829826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.475 qpair failed and we were unable to recover it. 00:30:40.475 [2024-04-17 06:56:44.829992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.830120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.830145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.475 qpair failed and we were unable to recover it. 00:30:40.475 [2024-04-17 06:56:44.830315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.830448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.830473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.475 qpair failed and we were unable to recover it. 00:30:40.475 [2024-04-17 06:56:44.830593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.830714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.830744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.475 qpair failed and we were unable to recover it. 00:30:40.475 [2024-04-17 06:56:44.830867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.830995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.831019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.475 qpair failed and we were unable to recover it. 00:30:40.475 [2024-04-17 06:56:44.831197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.831323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.831347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.475 qpair failed and we were unable to recover it. 
00:30:40.475 [2024-04-17 06:56:44.831469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.831587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.831611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.475 qpair failed and we were unable to recover it. 00:30:40.475 [2024-04-17 06:56:44.831736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.831874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.831903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.475 qpair failed and we were unable to recover it. 00:30:40.475 [2024-04-17 06:56:44.832053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.832211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.832235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.475 qpair failed and we were unable to recover it. 00:30:40.475 [2024-04-17 06:56:44.832366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.832487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.832511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.475 qpair failed and we were unable to recover it. 00:30:40.475 [2024-04-17 06:56:44.832646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.832770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.832795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.475 qpair failed and we were unable to recover it. 00:30:40.475 [2024-04-17 06:56:44.832949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.833092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.833116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.475 qpair failed and we were unable to recover it. 00:30:40.475 [2024-04-17 06:56:44.833245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.833391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.833416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.475 qpair failed and we were unable to recover it. 
00:30:40.475 [2024-04-17 06:56:44.833546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.833666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.833690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.475 qpair failed and we were unable to recover it. 00:30:40.475 [2024-04-17 06:56:44.833816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.833944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.833968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.475 qpair failed and we were unable to recover it. 00:30:40.475 [2024-04-17 06:56:44.834088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.834217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.834242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.475 qpair failed and we were unable to recover it. 00:30:40.475 [2024-04-17 06:56:44.834377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.834507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.834532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.475 qpair failed and we were unable to recover it. 00:30:40.475 [2024-04-17 06:56:44.834688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.834801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.834825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.475 qpair failed and we were unable to recover it. 00:30:40.475 [2024-04-17 06:56:44.834959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.835094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.475 [2024-04-17 06:56:44.835118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.475 qpair failed and we were unable to recover it. 00:30:40.475 [2024-04-17 06:56:44.835257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.835389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.835413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.476 qpair failed and we were unable to recover it. 
00:30:40.476 [2024-04-17 06:56:44.835548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.835698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.835721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.476 qpair failed and we were unable to recover it. 00:30:40.476 [2024-04-17 06:56:44.835872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.835994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.836021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.476 qpair failed and we were unable to recover it. 00:30:40.476 [2024-04-17 06:56:44.836142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.836289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.836314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.476 qpair failed and we were unable to recover it. 00:30:40.476 [2024-04-17 06:56:44.836444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.836564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.836589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.476 qpair failed and we were unable to recover it. 00:30:40.476 [2024-04-17 06:56:44.836771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.836887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.836911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.476 qpair failed and we were unable to recover it. 00:30:40.476 [2024-04-17 06:56:44.837032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.837216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.837242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.476 qpair failed and we were unable to recover it. 00:30:40.476 [2024-04-17 06:56:44.837402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.837532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.837556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.476 qpair failed and we were unable to recover it. 
00:30:40.476 [2024-04-17 06:56:44.837716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.837841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.837865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.476 qpair failed and we were unable to recover it. 00:30:40.476 [2024-04-17 06:56:44.837988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.838116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.838145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.476 qpair failed and we were unable to recover it. 00:30:40.476 [2024-04-17 06:56:44.838306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.838435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.838459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.476 qpair failed and we were unable to recover it. 00:30:40.476 [2024-04-17 06:56:44.838595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.838725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.838749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.476 qpair failed and we were unable to recover it. 00:30:40.476 [2024-04-17 06:56:44.838879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.839008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.839032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.476 qpair failed and we were unable to recover it. 00:30:40.476 [2024-04-17 06:56:44.839157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.839285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.839312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.476 qpair failed and we were unable to recover it. 00:30:40.476 [2024-04-17 06:56:44.839442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.839583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.839607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.476 qpair failed and we were unable to recover it. 
00:30:40.476 [2024-04-17 06:56:44.839783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.839922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.839946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.476 qpair failed and we were unable to recover it. 00:30:40.476 [2024-04-17 06:56:44.840065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.840188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.840213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.476 qpair failed and we were unable to recover it. 00:30:40.476 [2024-04-17 06:56:44.840344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.840466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.840492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.476 qpair failed and we were unable to recover it. 00:30:40.476 [2024-04-17 06:56:44.840608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.840761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.840785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.476 qpair failed and we were unable to recover it. 00:30:40.476 [2024-04-17 06:56:44.840936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.841106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.841131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.476 qpair failed and we were unable to recover it. 00:30:40.476 [2024-04-17 06:56:44.841267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.841392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.841417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.476 qpair failed and we were unable to recover it. 00:30:40.476 [2024-04-17 06:56:44.841539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.841661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.841686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.476 qpair failed and we were unable to recover it. 
00:30:40.476 [2024-04-17 06:56:44.841849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.841994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.842018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.476 qpair failed and we were unable to recover it. 00:30:40.476 [2024-04-17 06:56:44.842136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.842314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.842340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.476 qpair failed and we were unable to recover it. 00:30:40.476 [2024-04-17 06:56:44.842462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.476 [2024-04-17 06:56:44.842598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.842623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.477 qpair failed and we were unable to recover it. 00:30:40.477 [2024-04-17 06:56:44.842760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.842881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.842905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.477 qpair failed and we were unable to recover it. 00:30:40.477 [2024-04-17 06:56:44.843114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.843242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.843268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.477 qpair failed and we were unable to recover it. 00:30:40.477 [2024-04-17 06:56:44.843389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.843515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.843540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.477 qpair failed and we were unable to recover it. 00:30:40.477 [2024-04-17 06:56:44.843708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.843862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.843886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.477 qpair failed and we were unable to recover it. 
00:30:40.477 [2024-04-17 06:56:44.844032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.844158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.844188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.477 qpair failed and we were unable to recover it. 00:30:40.477 [2024-04-17 06:56:44.844324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.844450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.844474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.477 qpair failed and we were unable to recover it. 00:30:40.477 [2024-04-17 06:56:44.844595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.844716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.844740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.477 qpair failed and we were unable to recover it. 00:30:40.477 [2024-04-17 06:56:44.844863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.844982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.845006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.477 qpair failed and we were unable to recover it. 00:30:40.477 [2024-04-17 06:56:44.845160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.845319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.845344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.477 qpair failed and we were unable to recover it. 00:30:40.477 [2024-04-17 06:56:44.845468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.845667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.845692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.477 qpair failed and we were unable to recover it. 00:30:40.477 [2024-04-17 06:56:44.845838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.846040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.846065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.477 qpair failed and we were unable to recover it. 
00:30:40.477 [2024-04-17 06:56:44.846213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.846354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.846379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.477 qpair failed and we were unable to recover it. 00:30:40.477 [2024-04-17 06:56:44.846501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.846640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.846666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.477 qpair failed and we were unable to recover it. 00:30:40.477 [2024-04-17 06:56:44.846800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.846980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.847004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.477 qpair failed and we were unable to recover it. 00:30:40.477 [2024-04-17 06:56:44.847158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.847281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.847315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.477 qpair failed and we were unable to recover it. 00:30:40.477 [2024-04-17 06:56:44.847437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.847591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.847615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.477 qpair failed and we were unable to recover it. 00:30:40.477 [2024-04-17 06:56:44.847752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.847872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.847896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.477 qpair failed and we were unable to recover it. 00:30:40.477 [2024-04-17 06:56:44.848036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.848155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.848185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.477 qpair failed and we were unable to recover it. 
00:30:40.477 [2024-04-17 06:56:44.848311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.848463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.848487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.477 qpair failed and we were unable to recover it. 00:30:40.477 [2024-04-17 06:56:44.848620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.848776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.848800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.477 qpair failed and we were unable to recover it. 00:30:40.477 [2024-04-17 06:56:44.848928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.849091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.849115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.477 qpair failed and we were unable to recover it. 00:30:40.477 [2024-04-17 06:56:44.849269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.849406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.477 [2024-04-17 06:56:44.849430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.477 qpair failed and we were unable to recover it. 00:30:40.478 [2024-04-17 06:56:44.849563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.849717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.849741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.478 qpair failed and we were unable to recover it. 00:30:40.478 [2024-04-17 06:56:44.849878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.850006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.850032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.478 qpair failed and we were unable to recover it. 00:30:40.478 [2024-04-17 06:56:44.850197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.850318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.850348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.478 qpair failed and we were unable to recover it. 
00:30:40.478 [2024-04-17 06:56:44.850483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.850634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.850659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.478 qpair failed and we were unable to recover it. 00:30:40.478 [2024-04-17 06:56:44.850793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.850922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.850947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.478 qpair failed and we were unable to recover it. 00:30:40.478 [2024-04-17 06:56:44.851083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.851222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.851247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.478 qpair failed and we were unable to recover it. 00:30:40.478 [2024-04-17 06:56:44.851370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.851492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.851516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.478 qpair failed and we were unable to recover it. 00:30:40.478 [2024-04-17 06:56:44.851719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.851842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.851867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.478 qpair failed and we were unable to recover it. 00:30:40.478 [2024-04-17 06:56:44.852036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.852156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.852219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.478 qpair failed and we were unable to recover it. 00:30:40.478 [2024-04-17 06:56:44.852358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.852558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.852582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.478 qpair failed and we were unable to recover it. 
00:30:40.478 [2024-04-17 06:56:44.852722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.852853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.852877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.478 qpair failed and we were unable to recover it. 00:30:40.478 [2024-04-17 06:56:44.853017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.853146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.853172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.478 qpair failed and we were unable to recover it. 00:30:40.478 [2024-04-17 06:56:44.853326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.853447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.853471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.478 qpair failed and we were unable to recover it. 00:30:40.478 [2024-04-17 06:56:44.853596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.853726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.853759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.478 qpair failed and we were unable to recover it. 00:30:40.478 [2024-04-17 06:56:44.853908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.854087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.854112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.478 qpair failed and we were unable to recover it. 00:30:40.478 [2024-04-17 06:56:44.854240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.854365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.854390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.478 qpair failed and we were unable to recover it. 00:30:40.478 [2024-04-17 06:56:44.854563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.854714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.854739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.478 qpair failed and we were unable to recover it. 
00:30:40.478 [2024-04-17 06:56:44.854859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.854991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.855016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.478 qpair failed and we were unable to recover it. 00:30:40.478 [2024-04-17 06:56:44.855195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.855324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.855349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.478 qpair failed and we were unable to recover it. 00:30:40.478 [2024-04-17 06:56:44.855476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.855608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.855632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.478 qpair failed and we were unable to recover it. 00:30:40.478 [2024-04-17 06:56:44.855799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.855961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.855986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.478 qpair failed and we were unable to recover it. 00:30:40.478 [2024-04-17 06:56:44.856148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.856279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.856304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.478 qpair failed and we were unable to recover it. 00:30:40.478 [2024-04-17 06:56:44.856428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.856597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.856622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.478 qpair failed and we were unable to recover it. 00:30:40.478 [2024-04-17 06:56:44.856785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.856907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.856931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.478 qpair failed and we were unable to recover it. 
00:30:40.478 [2024-04-17 06:56:44.857098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.857227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.857252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.478 qpair failed and we were unable to recover it. 00:30:40.478 [2024-04-17 06:56:44.857409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.857644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.478 [2024-04-17 06:56:44.857668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.478 qpair failed and we were unable to recover it. 00:30:40.478 [2024-04-17 06:56:44.857789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.857923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.857947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.479 qpair failed and we were unable to recover it. 00:30:40.479 [2024-04-17 06:56:44.858071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.858210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.858236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.479 qpair failed and we were unable to recover it. 00:30:40.479 [2024-04-17 06:56:44.858401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.858527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.858551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.479 qpair failed and we were unable to recover it. 00:30:40.479 [2024-04-17 06:56:44.858697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.858827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.858851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.479 qpair failed and we were unable to recover it. 00:30:40.479 [2024-04-17 06:56:44.858968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.859113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.859137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.479 qpair failed and we were unable to recover it. 
00:30:40.479 [2024-04-17 06:56:44.859278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.859398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.859423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.479 qpair failed and we were unable to recover it. 00:30:40.479 [2024-04-17 06:56:44.859592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.859749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.859773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.479 qpair failed and we were unable to recover it. 00:30:40.479 [2024-04-17 06:56:44.859888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.860021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.860046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.479 qpair failed and we were unable to recover it. 00:30:40.479 [2024-04-17 06:56:44.860182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.860318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.860343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.479 qpair failed and we were unable to recover it. 00:30:40.479 [2024-04-17 06:56:44.860493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.860634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.860659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.479 qpair failed and we were unable to recover it. 00:30:40.479 [2024-04-17 06:56:44.860819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.860976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.861000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.479 qpair failed and we were unable to recover it. 00:30:40.479 [2024-04-17 06:56:44.861121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.861324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.861349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.479 qpair failed and we were unable to recover it. 
00:30:40.479 [2024-04-17 06:56:44.861498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.861623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.861647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.479 qpair failed and we were unable to recover it. 00:30:40.479 [2024-04-17 06:56:44.861783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.861901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.861926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.479 qpair failed and we were unable to recover it. 00:30:40.479 [2024-04-17 06:56:44.862086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.862243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.862268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.479 qpair failed and we were unable to recover it. 00:30:40.479 [2024-04-17 06:56:44.862419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.862610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.862635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.479 qpair failed and we were unable to recover it. 00:30:40.479 [2024-04-17 06:56:44.862785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.862905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.862929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.479 qpair failed and we were unable to recover it. 00:30:40.479 [2024-04-17 06:56:44.863047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.863186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.863212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.479 qpair failed and we were unable to recover it. 00:30:40.479 [2024-04-17 06:56:44.863333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.863454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.863479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.479 qpair failed and we were unable to recover it. 
00:30:40.479 [2024-04-17 06:56:44.863596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.863729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.863762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.479 qpair failed and we were unable to recover it. 00:30:40.479 [2024-04-17 06:56:44.863910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.864042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.864066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.479 qpair failed and we were unable to recover it. 00:30:40.479 [2024-04-17 06:56:44.864201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.864340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.864364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.479 qpair failed and we were unable to recover it. 00:30:40.479 [2024-04-17 06:56:44.864491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.864612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.864637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.479 qpair failed and we were unable to recover it. 00:30:40.479 [2024-04-17 06:56:44.864772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.864905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.864931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.479 qpair failed and we were unable to recover it. 00:30:40.479 [2024-04-17 06:56:44.865058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.865223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.865248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.479 qpair failed and we were unable to recover it. 00:30:40.479 [2024-04-17 06:56:44.865370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.865485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.479 [2024-04-17 06:56:44.865509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.479 qpair failed and we were unable to recover it. 
00:30:40.480 [2024-04-17 06:56:44.865636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.865776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.865800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.480 qpair failed and we were unable to recover it. 00:30:40.480 [2024-04-17 06:56:44.865918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.866048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.866076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.480 qpair failed and we were unable to recover it. 00:30:40.480 [2024-04-17 06:56:44.866202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.866322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.866346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.480 qpair failed and we were unable to recover it. 00:30:40.480 [2024-04-17 06:56:44.866477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.866596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.866622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.480 qpair failed and we were unable to recover it. 00:30:40.480 [2024-04-17 06:56:44.866744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.866894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.866918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.480 qpair failed and we were unable to recover it. 00:30:40.480 [2024-04-17 06:56:44.867069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.867196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.867221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.480 qpair failed and we were unable to recover it. 00:30:40.480 [2024-04-17 06:56:44.867358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.867492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.867517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.480 qpair failed and we were unable to recover it. 
00:30:40.480 [2024-04-17 06:56:44.867684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.867839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.867864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.480 qpair failed and we were unable to recover it. 00:30:40.480 [2024-04-17 06:56:44.868004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.868130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.868156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df8b0 with addr=10.0.0.2, port=4420 00:30:40.480 qpair failed and we were unable to recover it. 00:30:40.480 [2024-04-17 06:56:44.868347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.868517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.868554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.480 qpair failed and we were unable to recover it. 00:30:40.480 [2024-04-17 06:56:44.868693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.868827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.868852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.480 qpair failed and we were unable to recover it. 00:30:40.480 [2024-04-17 06:56:44.868986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.869113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.869139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.480 qpair failed and we were unable to recover it. 00:30:40.480 [2024-04-17 06:56:44.869318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.869448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.869483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.480 qpair failed and we were unable to recover it. 00:30:40.480 [2024-04-17 06:56:44.869613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.869734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.869758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.480 qpair failed and we were unable to recover it. 
00:30:40.480 [2024-04-17 06:56:44.869892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.870056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.870080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.480 qpair failed and we were unable to recover it. 00:30:40.480 [2024-04-17 06:56:44.870205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.870332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.870358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.480 qpair failed and we were unable to recover it. 00:30:40.480 [2024-04-17 06:56:44.870508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.870675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.870700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.480 qpair failed and we were unable to recover it. 00:30:40.480 [2024-04-17 06:56:44.870820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.870944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.870968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.480 qpair failed and we were unable to recover it. 00:30:40.480 [2024-04-17 06:56:44.871125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.871267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.871292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.480 qpair failed and we were unable to recover it. 00:30:40.480 [2024-04-17 06:56:44.871418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.871552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.871577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.480 qpair failed and we were unable to recover it. 00:30:40.480 [2024-04-17 06:56:44.871714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.871899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.871923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.480 qpair failed and we were unable to recover it. 
00:30:40.480 [2024-04-17 06:56:44.872063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.872197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.872222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.480 qpair failed and we were unable to recover it. 00:30:40.480 [2024-04-17 06:56:44.872381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.872498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.872522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.480 qpair failed and we were unable to recover it. 00:30:40.480 [2024-04-17 06:56:44.872650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.872775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.480 [2024-04-17 06:56:44.872801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.480 qpair failed and we were unable to recover it. 00:30:40.481 [2024-04-17 06:56:44.872961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.873116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.873141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.481 qpair failed and we were unable to recover it. 00:30:40.481 [2024-04-17 06:56:44.873299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.873420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.873444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.481 qpair failed and we were unable to recover it. 00:30:40.481 [2024-04-17 06:56:44.873570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.873694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.873720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.481 qpair failed and we were unable to recover it. 00:30:40.481 [2024-04-17 06:56:44.873843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.873993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.874017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.481 qpair failed and we were unable to recover it. 
00:30:40.481 [2024-04-17 06:56:44.874185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.874320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.874344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.481 qpair failed and we were unable to recover it. 00:30:40.481 [2024-04-17 06:56:44.874480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.874604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.874630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.481 qpair failed and we were unable to recover it. 00:30:40.481 [2024-04-17 06:56:44.874781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.874902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.874926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.481 qpair failed and we were unable to recover it. 00:30:40.481 [2024-04-17 06:56:44.875054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.875204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.875230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.481 qpair failed and we were unable to recover it. 00:30:40.481 [2024-04-17 06:56:44.875373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.875501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.875526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.481 qpair failed and we were unable to recover it. 00:30:40.481 [2024-04-17 06:56:44.875653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.875780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.875806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.481 qpair failed and we were unable to recover it. 00:30:40.481 [2024-04-17 06:56:44.875995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.876118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.876144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.481 qpair failed and we were unable to recover it. 
00:30:40.481 [2024-04-17 06:56:44.876272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.876402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.876428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.481 qpair failed and we were unable to recover it. 00:30:40.481 [2024-04-17 06:56:44.876567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.876720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.876745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.481 qpair failed and we were unable to recover it. 00:30:40.481 [2024-04-17 06:56:44.876868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.877024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.877048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.481 qpair failed and we were unable to recover it. 00:30:40.481 [2024-04-17 06:56:44.877168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.877306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.877330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.481 qpair failed and we were unable to recover it. 00:30:40.481 [2024-04-17 06:56:44.877460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.877588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.877613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.481 qpair failed and we were unable to recover it. 00:30:40.481 [2024-04-17 06:56:44.877768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.877887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.877911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.481 qpair failed and we were unable to recover it. 00:30:40.481 [2024-04-17 06:56:44.878033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.878157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.878187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.481 qpair failed and we were unable to recover it. 
00:30:40.481 [2024-04-17 06:56:44.878320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.878434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.878458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.481 qpair failed and we were unable to recover it. 00:30:40.481 [2024-04-17 06:56:44.878593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.878710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.878734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.481 qpair failed and we were unable to recover it. 00:30:40.481 [2024-04-17 06:56:44.878919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.879051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.879075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.481 qpair failed and we were unable to recover it. 00:30:40.481 [2024-04-17 06:56:44.879275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.879400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.879424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.481 qpair failed and we were unable to recover it. 00:30:40.481 [2024-04-17 06:56:44.879566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.879719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.879743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.481 qpair failed and we were unable to recover it. 00:30:40.481 [2024-04-17 06:56:44.879903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.880041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.880066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.481 qpair failed and we were unable to recover it. 00:30:40.481 [2024-04-17 06:56:44.880190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.880317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.880341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.481 qpair failed and we were unable to recover it. 
00:30:40.481 [2024-04-17 06:56:44.880467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.880607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.880632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.481 qpair failed and we were unable to recover it. 00:30:40.481 [2024-04-17 06:56:44.880758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.481 [2024-04-17 06:56:44.880875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.880899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.482 qpair failed and we were unable to recover it. 00:30:40.482 [2024-04-17 06:56:44.881046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.881160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.881191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.482 qpair failed and we were unable to recover it. 00:30:40.482 [2024-04-17 06:56:44.881334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.881454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.881479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.482 qpair failed and we were unable to recover it. 00:30:40.482 [2024-04-17 06:56:44.881630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.881794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.881819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.482 qpair failed and we were unable to recover it. 00:30:40.482 [2024-04-17 06:56:44.881952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.882103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.882128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.482 qpair failed and we were unable to recover it. 00:30:40.482 [2024-04-17 06:56:44.882277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.882418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.882443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.482 qpair failed and we were unable to recover it. 
00:30:40.482 [2024-04-17 06:56:44.882598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.882730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.882754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.482 qpair failed and we were unable to recover it. 00:30:40.482 [2024-04-17 06:56:44.882920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.883079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.883103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.482 qpair failed and we were unable to recover it. 00:30:40.482 [2024-04-17 06:56:44.883243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.883376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.883400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.482 qpair failed and we were unable to recover it. 00:30:40.482 [2024-04-17 06:56:44.883568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.883715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.883739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.482 qpair failed and we were unable to recover it. 00:30:40.482 [2024-04-17 06:56:44.883878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.884032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.884056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.482 qpair failed and we were unable to recover it. 00:30:40.482 [2024-04-17 06:56:44.884217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.884346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.884371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.482 qpair failed and we were unable to recover it. 00:30:40.482 [2024-04-17 06:56:44.884503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.884626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.884651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.482 qpair failed and we were unable to recover it. 
00:30:40.482 [2024-04-17 06:56:44.884777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.884907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.884931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.482 qpair failed and we were unable to recover it. 00:30:40.482 [2024-04-17 06:56:44.885094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.885241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.885266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.482 qpair failed and we were unable to recover it. 00:30:40.482 [2024-04-17 06:56:44.885407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.885559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.885584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.482 qpair failed and we were unable to recover it. 00:30:40.482 [2024-04-17 06:56:44.885704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.885838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.885863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.482 qpair failed and we were unable to recover it. 00:30:40.482 [2024-04-17 06:56:44.885986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.886110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.886134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.482 qpair failed and we were unable to recover it. 00:30:40.482 [2024-04-17 06:56:44.886326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.886461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.886486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.482 qpair failed and we were unable to recover it. 00:30:40.482 [2024-04-17 06:56:44.886621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.886781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.886805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.482 qpair failed and we were unable to recover it. 
00:30:40.482 [2024-04-17 06:56:44.886927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.887049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.887075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.482 qpair failed and we were unable to recover it. 00:30:40.482 [2024-04-17 06:56:44.887218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.887341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.482 [2024-04-17 06:56:44.887365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.482 qpair failed and we were unable to recover it. 00:30:40.482 [2024-04-17 06:56:44.887519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.887649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.887676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.483 qpair failed and we were unable to recover it. 00:30:40.483 [2024-04-17 06:56:44.887819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.887940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.887964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.483 qpair failed and we were unable to recover it. 00:30:40.483 [2024-04-17 06:56:44.888093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.888254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.888279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.483 qpair failed and we were unable to recover it. 00:30:40.483 [2024-04-17 06:56:44.888407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.888576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.888600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.483 qpair failed and we were unable to recover it. 00:30:40.483 [2024-04-17 06:56:44.888729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.888855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.888880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.483 qpair failed and we were unable to recover it. 
00:30:40.483 [2024-04-17 06:56:44.889037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.889184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.889209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.483 qpair failed and we were unable to recover it. 00:30:40.483 [2024-04-17 06:56:44.889328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.889453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.889491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.483 qpair failed and we were unable to recover it. 00:30:40.483 [2024-04-17 06:56:44.889658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.889780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.889804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.483 qpair failed and we were unable to recover it. 00:30:40.483 [2024-04-17 06:56:44.889926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.890093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.890118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.483 qpair failed and we were unable to recover it. 00:30:40.483 [2024-04-17 06:56:44.890246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.890368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.890392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.483 qpair failed and we were unable to recover it. 00:30:40.483 [2024-04-17 06:56:44.890520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.890648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.890672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.483 qpair failed and we were unable to recover it. 00:30:40.483 [2024-04-17 06:56:44.890831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.891006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.891031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.483 qpair failed and we were unable to recover it. 
00:30:40.483 [2024-04-17 06:56:44.891179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.891303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.891327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.483 qpair failed and we were unable to recover it. 00:30:40.483 [2024-04-17 06:56:44.891455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.891614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.891638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.483 qpair failed and we were unable to recover it. 00:30:40.483 [2024-04-17 06:56:44.891761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.891883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.891909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.483 qpair failed and we were unable to recover it. 00:30:40.483 [2024-04-17 06:56:44.892045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.892186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.892211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.483 qpair failed and we were unable to recover it. 00:30:40.483 [2024-04-17 06:56:44.892361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.892488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.892513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.483 qpair failed and we were unable to recover it. 00:30:40.483 [2024-04-17 06:56:44.892641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.892782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.892806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.483 qpair failed and we were unable to recover it. 00:30:40.483 [2024-04-17 06:56:44.892936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.893084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.893108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.483 qpair failed and we were unable to recover it. 
00:30:40.483 [2024-04-17 06:56:44.893255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.893397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.893423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.483 qpair failed and we were unable to recover it. 00:30:40.483 [2024-04-17 06:56:44.893578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.893716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.893741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.483 qpair failed and we were unable to recover it. 00:30:40.483 [2024-04-17 06:56:44.893905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.894028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.894052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.483 qpair failed and we were unable to recover it. 00:30:40.483 [2024-04-17 06:56:44.894185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.894336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.894360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.483 qpair failed and we were unable to recover it. 00:30:40.483 [2024-04-17 06:56:44.894487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.894640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.894665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.483 qpair failed and we were unable to recover it. 00:30:40.483 [2024-04-17 06:56:44.894807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.894941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.894966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.483 qpair failed and we were unable to recover it. 00:30:40.483 [2024-04-17 06:56:44.895094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.895242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.895267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.483 qpair failed and we were unable to recover it. 
00:30:40.483 [2024-04-17 06:56:44.895395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.895522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.483 [2024-04-17 06:56:44.895546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.484 qpair failed and we were unable to recover it. 00:30:40.484 [2024-04-17 06:56:44.895675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.895806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.895832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.484 qpair failed and we were unable to recover it. 00:30:40.484 [2024-04-17 06:56:44.895967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.896094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.896119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.484 qpair failed and we were unable to recover it. 00:30:40.484 [2024-04-17 06:56:44.896306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.896432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.896457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.484 qpair failed and we were unable to recover it. 00:30:40.484 [2024-04-17 06:56:44.896587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.896716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.896740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.484 qpair failed and we were unable to recover it. 00:30:40.484 [2024-04-17 06:56:44.896867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.896987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.897011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.484 qpair failed and we were unable to recover it. 00:30:40.484 [2024-04-17 06:56:44.897144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.897305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.897331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.484 qpair failed and we were unable to recover it. 
00:30:40.484 [2024-04-17 06:56:44.897456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.897581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.897606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.484 qpair failed and we were unable to recover it. 00:30:40.484 [2024-04-17 06:56:44.897735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.897858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.897883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.484 qpair failed and we were unable to recover it. 00:30:40.484 [2024-04-17 06:56:44.898022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.898160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.898192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.484 qpair failed and we were unable to recover it. 00:30:40.484 [2024-04-17 06:56:44.898321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.898454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.898479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.484 qpair failed and we were unable to recover it. 00:30:40.484 [2024-04-17 06:56:44.898627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.898751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.898775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.484 qpair failed and we were unable to recover it. 00:30:40.484 [2024-04-17 06:56:44.898915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.899070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.899095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.484 qpair failed and we were unable to recover it. 00:30:40.484 [2024-04-17 06:56:44.899243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.899378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.899404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.484 qpair failed and we were unable to recover it. 
00:30:40.484 [2024-04-17 06:56:44.899563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.899698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.899724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.484 qpair failed and we were unable to recover it. 00:30:40.484 [2024-04-17 06:56:44.899861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.900008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.900032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.484 qpair failed and we were unable to recover it. 00:30:40.484 [2024-04-17 06:56:44.900164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.900394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.900422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.484 qpair failed and we were unable to recover it. 00:30:40.484 [2024-04-17 06:56:44.900557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.900710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.900734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.484 qpair failed and we were unable to recover it. 00:30:40.484 [2024-04-17 06:56:44.900892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.901049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.901073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.484 qpair failed and we were unable to recover it. 00:30:40.484 [2024-04-17 06:56:44.901207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.901334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.901360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.484 qpair failed and we were unable to recover it. 00:30:40.484 [2024-04-17 06:56:44.901494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.901631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.901655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.484 qpair failed and we were unable to recover it. 
00:30:40.484 [2024-04-17 06:56:44.901802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.901934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.901959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.484 qpair failed and we were unable to recover it. 00:30:40.484 [2024-04-17 06:56:44.902127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.902291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.902316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.484 qpair failed and we were unable to recover it. 00:30:40.484 [2024-04-17 06:56:44.902472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.902628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.902652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.484 qpair failed and we were unable to recover it. 00:30:40.484 [2024-04-17 06:56:44.902788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.902934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.902963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.484 qpair failed and we were unable to recover it. 00:30:40.484 [2024-04-17 06:56:44.903088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.903242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.903268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.484 qpair failed and we were unable to recover it. 00:30:40.484 [2024-04-17 06:56:44.903394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.903534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.903559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.484 qpair failed and we were unable to recover it. 00:30:40.484 [2024-04-17 06:56:44.903713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.903866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.484 [2024-04-17 06:56:44.903891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.484 qpair failed and we were unable to recover it. 
00:30:40.484 [2024-04-17 06:56:44.904023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.904162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.904194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.485 qpair failed and we were unable to recover it. 00:30:40.485 [2024-04-17 06:56:44.904322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.904447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.904471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.485 qpair failed and we were unable to recover it. 00:30:40.485 [2024-04-17 06:56:44.904628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.904782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.904807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.485 qpair failed and we were unable to recover it. 00:30:40.485 [2024-04-17 06:56:44.904940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.905094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.905118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.485 qpair failed and we were unable to recover it. 00:30:40.485 [2024-04-17 06:56:44.905250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.905389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.905413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.485 qpair failed and we were unable to recover it. 00:30:40.485 [2024-04-17 06:56:44.905578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.905705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.905729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.485 qpair failed and we were unable to recover it. 00:30:40.485 [2024-04-17 06:56:44.905883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.906043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.906071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.485 qpair failed and we were unable to recover it. 
00:30:40.485 [2024-04-17 06:56:44.906198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.906333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.906357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.485 qpair failed and we were unable to recover it. 00:30:40.485 [2024-04-17 06:56:44.906506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.906643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.906668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.485 qpair failed and we were unable to recover it. 00:30:40.485 [2024-04-17 06:56:44.906800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.906929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.906955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.485 qpair failed and we were unable to recover it. 00:30:40.485 [2024-04-17 06:56:44.907112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.907288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.907314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.485 qpair failed and we were unable to recover it. 00:30:40.485 [2024-04-17 06:56:44.907460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.907587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.907612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.485 qpair failed and we were unable to recover it. 00:30:40.485 [2024-04-17 06:56:44.907750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.907920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.907944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.485 qpair failed and we were unable to recover it. 00:30:40.485 [2024-04-17 06:56:44.908068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.908260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.908285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.485 qpair failed and we were unable to recover it. 
00:30:40.485 [2024-04-17 06:56:44.908457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.908576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.908600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.485 qpair failed and we were unable to recover it. 00:30:40.485 [2024-04-17 06:56:44.908739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.908860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.908886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.485 qpair failed and we were unable to recover it. 00:30:40.485 [2024-04-17 06:56:44.909018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.909156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.909193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.485 qpair failed and we were unable to recover it. 00:30:40.485 [2024-04-17 06:56:44.909321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.909451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.909475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.485 qpair failed and we were unable to recover it. 00:30:40.485 [2024-04-17 06:56:44.909604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.909752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.909776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.485 qpair failed and we were unable to recover it. 00:30:40.485 [2024-04-17 06:56:44.909927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.910056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.910080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.485 qpair failed and we were unable to recover it. 00:30:40.485 [2024-04-17 06:56:44.910234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.910392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.485 [2024-04-17 06:56:44.910417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.485 qpair failed and we were unable to recover it. 
00:30:40.485 [2024-04-17 06:56:44.910570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.485 [2024-04-17 06:56:44.910689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.485 [2024-04-17 06:56:44.910713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420
00:30:40.485 qpair failed and we were unable to recover it.
00:30:40.485 [2024-04-17 06:56:44.910837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.485 [2024-04-17 06:56:44.910971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.485 [2024-04-17 06:56:44.910996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420
00:30:40.485 qpair failed and we were unable to recover it.
00:30:40.485 [2024-04-17 06:56:44.911120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.485 [2024-04-17 06:56:44.911281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.485 [2024-04-17 06:56:44.911306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420
00:30:40.485 qpair failed and we were unable to recover it.
00:30:40.485 06:56:44 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:30:40.485 [2024-04-17 06:56:44.911458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.485 06:56:44 -- common/autotest_common.sh@850 -- # return 0
00:30:40.485 [2024-04-17 06:56:44.911595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.485 [2024-04-17 06:56:44.911620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420
00:30:40.485 06:56:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:30:40.485 qpair failed and we were unable to recover it.
00:30:40.485 06:56:44 -- common/autotest_common.sh@716 -- # xtrace_disable
00:30:40.485 [2024-04-17 06:56:44.911753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.485 06:56:44 -- common/autotest_common.sh@10 -- # set +x
00:30:40.485 [2024-04-17 06:56:44.911876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.485 [2024-04-17 06:56:44.911901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420
00:30:40.485 qpair failed and we were unable to recover it.
00:30:40.485 [2024-04-17 06:56:44.912066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.485 [2024-04-17 06:56:44.912250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.486 [2024-04-17 06:56:44.912275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420
00:30:40.486 qpair failed and we were unable to recover it.
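For reference, errno = 111 in the messages above is ECONNREFUSED on Linux: the initiator's connect() to 10.0.0.2 port 4420 (the standard NVMe/TCP port) is being refused, which on a reachable host means nothing is accepting connections on that port yet, so each qpair connect attempt fails and is retried while the script above finishes bringing up the nvmf target. A minimal standalone sketch of the same failure mode, independent of SPDK (the address and port are simply copied from the log):

/* Illustrative only: connect() to a TCP port with no listener reports errno 111 (ECONNREFUSED) on Linux. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port used by the test */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on 10.0.0.2:4420 this prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}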
00:30:40.486 [2024-04-17 06:56:44.912406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.912556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.912579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.486 qpair failed and we were unable to recover it. 00:30:40.486 [2024-04-17 06:56:44.912710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.912840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.912864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.486 qpair failed and we were unable to recover it. 00:30:40.486 [2024-04-17 06:56:44.913030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.913149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.913181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.486 qpair failed and we were unable to recover it. 00:30:40.486 [2024-04-17 06:56:44.913321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.913471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.913495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.486 qpair failed and we were unable to recover it. 00:30:40.486 [2024-04-17 06:56:44.913666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.913814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.913839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.486 qpair failed and we were unable to recover it. 00:30:40.486 [2024-04-17 06:56:44.913963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.914098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.914122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.486 qpair failed and we were unable to recover it. 00:30:40.486 [2024-04-17 06:56:44.914263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.914388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.914412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.486 qpair failed and we were unable to recover it. 
00:30:40.486 [2024-04-17 06:56:44.914552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.914718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.914742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.486 qpair failed and we were unable to recover it. 00:30:40.486 [2024-04-17 06:56:44.914899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.915023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.915048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.486 qpair failed and we were unable to recover it. 00:30:40.486 [2024-04-17 06:56:44.915228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.915368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.915392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.486 qpair failed and we were unable to recover it. 00:30:40.486 [2024-04-17 06:56:44.915553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.915716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.915740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.486 qpair failed and we were unable to recover it. 00:30:40.486 [2024-04-17 06:56:44.915920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.916039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.916063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.486 qpair failed and we were unable to recover it. 00:30:40.486 [2024-04-17 06:56:44.916189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.916343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.916368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.486 qpair failed and we were unable to recover it. 00:30:40.486 [2024-04-17 06:56:44.916524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.916668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.916692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.486 qpair failed and we were unable to recover it. 
00:30:40.486 [2024-04-17 06:56:44.916819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.916974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.916999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.486 qpair failed and we were unable to recover it. 00:30:40.486 [2024-04-17 06:56:44.917156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.917317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.917343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.486 qpair failed and we were unable to recover it. 00:30:40.486 [2024-04-17 06:56:44.917474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.917606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.917630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.486 qpair failed and we were unable to recover it. 00:30:40.486 [2024-04-17 06:56:44.917799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.917927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.917953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.486 qpair failed and we were unable to recover it. 00:30:40.486 [2024-04-17 06:56:44.918105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.918225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.918250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.486 qpair failed and we were unable to recover it. 00:30:40.486 [2024-04-17 06:56:44.918439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.918624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.918659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.486 qpair failed and we were unable to recover it. 00:30:40.486 [2024-04-17 06:56:44.918813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.918948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.918973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.486 qpair failed and we were unable to recover it. 
00:30:40.486 [2024-04-17 06:56:44.919094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.919262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.919287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.486 qpair failed and we were unable to recover it. 00:30:40.486 [2024-04-17 06:56:44.919421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.919559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.919586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.486 qpair failed and we were unable to recover it. 00:30:40.486 [2024-04-17 06:56:44.919716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.919863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.919888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.486 qpair failed and we were unable to recover it. 00:30:40.486 [2024-04-17 06:56:44.920012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.920169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.920199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.486 qpair failed and we were unable to recover it. 00:30:40.486 [2024-04-17 06:56:44.920323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.486 [2024-04-17 06:56:44.920470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.920496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.487 qpair failed and we were unable to recover it. 00:30:40.487 [2024-04-17 06:56:44.920641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.920769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.920796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.487 qpair failed and we were unable to recover it. 00:30:40.487 [2024-04-17 06:56:44.920952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.921091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.921117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.487 qpair failed and we were unable to recover it. 
00:30:40.487 [2024-04-17 06:56:44.921258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.921399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.921424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.487 qpair failed and we were unable to recover it. 00:30:40.487 [2024-04-17 06:56:44.921587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.921709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.921746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.487 qpair failed and we were unable to recover it. 00:30:40.487 [2024-04-17 06:56:44.921900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.922031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.922064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.487 qpair failed and we were unable to recover it. 00:30:40.487 [2024-04-17 06:56:44.922198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.922332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.922357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.487 qpair failed and we were unable to recover it. 00:30:40.487 [2024-04-17 06:56:44.922488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.922647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.922672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.487 qpair failed and we were unable to recover it. 00:30:40.487 [2024-04-17 06:56:44.922808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.922963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.922989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.487 qpair failed and we were unable to recover it. 00:30:40.487 [2024-04-17 06:56:44.923142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.923284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.923310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.487 qpair failed and we were unable to recover it. 
00:30:40.487 [2024-04-17 06:56:44.923452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.923594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.923620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.487 qpair failed and we were unable to recover it. 00:30:40.487 [2024-04-17 06:56:44.923745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.923863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.923896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.487 qpair failed and we were unable to recover it. 00:30:40.487 [2024-04-17 06:56:44.924023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.924173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.924206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.487 qpair failed and we were unable to recover it. 00:30:40.487 [2024-04-17 06:56:44.924328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.924462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.924488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.487 qpair failed and we were unable to recover it. 00:30:40.487 [2024-04-17 06:56:44.924620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.924749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.924775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.487 qpair failed and we were unable to recover it. 00:30:40.487 [2024-04-17 06:56:44.924909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.925058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.925084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.487 qpair failed and we were unable to recover it. 00:30:40.487 [2024-04-17 06:56:44.925235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.925363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.925390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.487 qpair failed and we were unable to recover it. 
00:30:40.487 [2024-04-17 06:56:44.925518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.925659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.925685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.487 qpair failed and we were unable to recover it. 00:30:40.487 [2024-04-17 06:56:44.925826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.925946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.925971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.487 qpair failed and we were unable to recover it. 00:30:40.487 [2024-04-17 06:56:44.926124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.926290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.926315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.487 qpair failed and we were unable to recover it. 00:30:40.487 [2024-04-17 06:56:44.926437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.926581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.926607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.487 qpair failed and we were unable to recover it. 00:30:40.487 [2024-04-17 06:56:44.926732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.926863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.926888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.487 qpair failed and we were unable to recover it. 00:30:40.487 [2024-04-17 06:56:44.927061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.487 [2024-04-17 06:56:44.927218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.927245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.488 qpair failed and we were unable to recover it. 00:30:40.488 [2024-04-17 06:56:44.927370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 06:56:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:40.488 [2024-04-17 06:56:44.927489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.927515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.488 qpair failed and we were unable to recover it. 
00:30:40.488 06:56:44 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:40.488 [2024-04-17 06:56:44.927656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 06:56:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.488 [2024-04-17 06:56:44.927783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.927809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.488 qpair failed and we were unable to recover it. 00:30:40.488 06:56:44 -- common/autotest_common.sh@10 -- # set +x 00:30:40.488 [2024-04-17 06:56:44.927952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.928095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.928119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.488 qpair failed and we were unable to recover it. 00:30:40.488 [2024-04-17 06:56:44.928249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.928369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.928394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.488 qpair failed and we were unable to recover it. 00:30:40.488 [2024-04-17 06:56:44.928583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.928713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.928737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.488 qpair failed and we were unable to recover it. 00:30:40.488 [2024-04-17 06:56:44.928873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.929008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.929032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.488 qpair failed and we were unable to recover it. 00:30:40.488 [2024-04-17 06:56:44.929194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.929315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.929339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.488 qpair failed and we were unable to recover it. 00:30:40.488 [2024-04-17 06:56:44.929498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.929618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.929643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.488 qpair failed and we were unable to recover it. 
00:30:40.488 [2024-04-17 06:56:44.929761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.929939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.929963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.488 qpair failed and we were unable to recover it. 00:30:40.488 [2024-04-17 06:56:44.930078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.930242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.930267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.488 qpair failed and we were unable to recover it. 00:30:40.488 [2024-04-17 06:56:44.930432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.930600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.930628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.488 qpair failed and we were unable to recover it. 00:30:40.488 [2024-04-17 06:56:44.930760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.930874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.930899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.488 qpair failed and we were unable to recover it. 00:30:40.488 [2024-04-17 06:56:44.931046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.931194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.931220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.488 qpair failed and we were unable to recover it. 00:30:40.488 [2024-04-17 06:56:44.931375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.931505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.931530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.488 qpair failed and we were unable to recover it. 00:30:40.488 [2024-04-17 06:56:44.931683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.931838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.931862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.488 qpair failed and we were unable to recover it. 
00:30:40.488 [2024-04-17 06:56:44.932023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.932179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.932204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.488 qpair failed and we were unable to recover it. 00:30:40.488 [2024-04-17 06:56:44.932326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.932443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.932467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.488 qpair failed and we were unable to recover it. 00:30:40.488 [2024-04-17 06:56:44.932613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.932743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.932767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.488 qpair failed and we were unable to recover it. 00:30:40.488 [2024-04-17 06:56:44.932917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.933066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.933090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.488 qpair failed and we were unable to recover it. 00:30:40.488 [2024-04-17 06:56:44.933240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.933359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.933383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.488 qpair failed and we were unable to recover it. 00:30:40.488 [2024-04-17 06:56:44.933542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.933659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.933687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.488 qpair failed and we were unable to recover it. 00:30:40.488 [2024-04-17 06:56:44.933804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.933953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.933977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.488 qpair failed and we were unable to recover it. 
00:30:40.488 [2024-04-17 06:56:44.934123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.934337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.934363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.488 qpair failed and we were unable to recover it. 00:30:40.488 [2024-04-17 06:56:44.934532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.934655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.934680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.488 qpair failed and we were unable to recover it. 00:30:40.488 [2024-04-17 06:56:44.934803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.934980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.935004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.488 qpair failed and we were unable to recover it. 00:30:40.488 [2024-04-17 06:56:44.935128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.935299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.935325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.488 qpair failed and we were unable to recover it. 00:30:40.488 [2024-04-17 06:56:44.935477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.488 [2024-04-17 06:56:44.935598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.935622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.489 qpair failed and we were unable to recover it. 00:30:40.489 [2024-04-17 06:56:44.935746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.935908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.935933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.489 qpair failed and we were unable to recover it. 00:30:40.489 [2024-04-17 06:56:44.936085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.936239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.936264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.489 qpair failed and we were unable to recover it. 
00:30:40.489 [2024-04-17 06:56:44.936404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.936553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.936577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.489 qpair failed and we were unable to recover it. 00:30:40.489 [2024-04-17 06:56:44.936714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.936834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.936863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.489 qpair failed and we were unable to recover it. 00:30:40.489 [2024-04-17 06:56:44.936992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.937141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.937166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.489 qpair failed and we were unable to recover it. 00:30:40.489 [2024-04-17 06:56:44.937304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.937487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.937512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.489 qpair failed and we were unable to recover it. 00:30:40.489 [2024-04-17 06:56:44.937631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.937756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.937780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.489 qpair failed and we were unable to recover it. 00:30:40.489 [2024-04-17 06:56:44.937926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.938051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.938076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.489 qpair failed and we were unable to recover it. 00:30:40.489 [2024-04-17 06:56:44.938200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.938324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.938348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.489 qpair failed and we were unable to recover it. 
00:30:40.489 [2024-04-17 06:56:44.938473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.938628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.938652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.489 qpair failed and we were unable to recover it. 00:30:40.489 [2024-04-17 06:56:44.938806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.938928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.938953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.489 qpair failed and we were unable to recover it. 00:30:40.489 [2024-04-17 06:56:44.939111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.939268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.939293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.489 qpair failed and we were unable to recover it. 00:30:40.489 [2024-04-17 06:56:44.939454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.939606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.939631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.489 qpair failed and we were unable to recover it. 00:30:40.489 [2024-04-17 06:56:44.939757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.939915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.939940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.489 qpair failed and we were unable to recover it. 00:30:40.489 [2024-04-17 06:56:44.940095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.940262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.940287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.489 qpair failed and we were unable to recover it. 00:30:40.489 [2024-04-17 06:56:44.940459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.940593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.940617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.489 qpair failed and we were unable to recover it. 
00:30:40.489 [2024-04-17 06:56:44.940786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.940932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.940956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.489 qpair failed and we were unable to recover it. 00:30:40.489 [2024-04-17 06:56:44.941082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.941250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.941275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.489 qpair failed and we were unable to recover it. 00:30:40.489 [2024-04-17 06:56:44.941435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.941605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.941629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.489 qpair failed and we were unable to recover it. 00:30:40.489 [2024-04-17 06:56:44.941825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.941949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.941973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.489 qpair failed and we were unable to recover it. 00:30:40.489 [2024-04-17 06:56:44.942140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.942336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.942362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.489 qpair failed and we were unable to recover it. 00:30:40.489 [2024-04-17 06:56:44.942593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.942781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.942806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.489 qpair failed and we were unable to recover it. 00:30:40.489 [2024-04-17 06:56:44.942924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.943054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.943078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.489 qpair failed and we were unable to recover it. 
00:30:40.489 [2024-04-17 06:56:44.943208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.943387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.943412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.489 qpair failed and we were unable to recover it. 00:30:40.489 [2024-04-17 06:56:44.943549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.943704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.943729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.489 qpair failed and we were unable to recover it. 00:30:40.489 [2024-04-17 06:56:44.943883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.944038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.489 [2024-04-17 06:56:44.944064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.489 qpair failed and we were unable to recover it. 00:30:40.490 [2024-04-17 06:56:44.944215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.944344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.944369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.490 qpair failed and we were unable to recover it. 00:30:40.490 [2024-04-17 06:56:44.944502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.944631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.944656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.490 qpair failed and we were unable to recover it. 00:30:40.490 [2024-04-17 06:56:44.944811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.944964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.944989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.490 qpair failed and we were unable to recover it. 00:30:40.490 [2024-04-17 06:56:44.945146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.945276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.945301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.490 qpair failed and we were unable to recover it. 
00:30:40.490 [2024-04-17 06:56:44.945430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.945564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.945589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.490 qpair failed and we were unable to recover it. 00:30:40.490 [2024-04-17 06:56:44.945754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.945877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.945902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.490 qpair failed and we were unable to recover it. 00:30:40.490 [2024-04-17 06:56:44.946042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.946181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.946206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.490 qpair failed and we were unable to recover it. 00:30:40.490 [2024-04-17 06:56:44.946359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.946528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.946552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.490 qpair failed and we were unable to recover it. 00:30:40.490 [2024-04-17 06:56:44.946681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.946808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.946832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.490 qpair failed and we were unable to recover it. 00:30:40.490 [2024-04-17 06:56:44.946990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.947145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.947171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.490 qpair failed and we were unable to recover it. 00:30:40.490 [2024-04-17 06:56:44.947369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.947495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.947520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.490 qpair failed and we were unable to recover it. 
00:30:40.490 [2024-04-17 06:56:44.947641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.947808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.947833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.490 qpair failed and we were unable to recover it. 00:30:40.490 [2024-04-17 06:56:44.947959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.948081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.948106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.490 qpair failed and we were unable to recover it. 00:30:40.490 [2024-04-17 06:56:44.948262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.948386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.948411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.490 qpair failed and we were unable to recover it. 00:30:40.490 [2024-04-17 06:56:44.948597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.948745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.948769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.490 qpair failed and we were unable to recover it. 00:30:40.490 [2024-04-17 06:56:44.948900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.949054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.949078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.490 qpair failed and we were unable to recover it. 00:30:40.490 [2024-04-17 06:56:44.949204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.949342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.949366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.490 qpair failed and we were unable to recover it. 00:30:40.490 [2024-04-17 06:56:44.949519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.949644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.949669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.490 qpair failed and we were unable to recover it. 
00:30:40.490 [2024-04-17 06:56:44.949801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.949956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.949981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.490 qpair failed and we were unable to recover it. 00:30:40.490 [2024-04-17 06:56:44.950130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.950274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.950301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.490 qpair failed and we were unable to recover it. 00:30:40.490 [2024-04-17 06:56:44.950439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.950608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.950633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.490 qpair failed and we were unable to recover it. 00:30:40.490 Malloc0 00:30:40.490 [2024-04-17 06:56:44.950765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.950892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.950916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.490 qpair failed and we were unable to recover it. 00:30:40.490 [2024-04-17 06:56:44.951042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 06:56:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.490 [2024-04-17 06:56:44.951193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.951218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.490 qpair failed and we were unable to recover it. 00:30:40.490 06:56:44 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:40.490 [2024-04-17 06:56:44.951377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 06:56:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.490 [2024-04-17 06:56:44.951501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.951526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.490 06:56:44 -- common/autotest_common.sh@10 -- # set +x 00:30:40.490 qpair failed and we were unable to recover it. 00:30:40.490 [2024-04-17 06:56:44.951648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.951774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.951798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.490 qpair failed and we were unable to recover it. 
00:30:40.490 [2024-04-17 06:56:44.951931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.490 [2024-04-17 06:56:44.952089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.952115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.491 qpair failed and we were unable to recover it. 00:30:40.491 [2024-04-17 06:56:44.952278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.952423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.952447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.491 qpair failed and we were unable to recover it. 00:30:40.491 [2024-04-17 06:56:44.952572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.952730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.952755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.491 qpair failed and we were unable to recover it. 00:30:40.491 [2024-04-17 06:56:44.952880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.953036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.953061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.491 qpair failed and we were unable to recover it. 00:30:40.491 [2024-04-17 06:56:44.953191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.953317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.953341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.491 qpair failed and we were unable to recover it. 00:30:40.491 [2024-04-17 06:56:44.953478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.953632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.953657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.491 qpair failed and we were unable to recover it. 00:30:40.491 [2024-04-17 06:56:44.953824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.953947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.953971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.491 qpair failed and we were unable to recover it. 
00:30:40.491 [2024-04-17 06:56:44.954098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.954250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.954275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.491 qpair failed and we were unable to recover it. 00:30:40.491 [2024-04-17 06:56:44.954401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.954449] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:40.491 [2024-04-17 06:56:44.954524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.954550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.491 qpair failed and we were unable to recover it. 00:30:40.491 [2024-04-17 06:56:44.954677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.954819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.954843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.491 qpair failed and we were unable to recover it. 00:30:40.491 [2024-04-17 06:56:44.954962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.955142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.955168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.491 qpair failed and we were unable to recover it. 00:30:40.491 [2024-04-17 06:56:44.955302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.955432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.955456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.491 qpair failed and we were unable to recover it. 00:30:40.491 [2024-04-17 06:56:44.955624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.955746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.955770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.491 qpair failed and we were unable to recover it. 00:30:40.491 [2024-04-17 06:56:44.955923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.956052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.956077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.491 qpair failed and we were unable to recover it. 
00:30:40.491 [2024-04-17 06:56:44.956253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.956381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.956406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.491 qpair failed and we were unable to recover it. 00:30:40.491 [2024-04-17 06:56:44.956560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.956681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.956706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.491 qpair failed and we were unable to recover it. 00:30:40.491 [2024-04-17 06:56:44.956840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.957022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.957046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.491 qpair failed and we were unable to recover it. 00:30:40.491 [2024-04-17 06:56:44.957198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.957330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.957355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.491 qpair failed and we were unable to recover it. 00:30:40.491 [2024-04-17 06:56:44.957509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.957664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.957689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.491 qpair failed and we were unable to recover it. 00:30:40.491 [2024-04-17 06:56:44.957809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.957933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.957959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.491 qpair failed and we were unable to recover it. 00:30:40.491 [2024-04-17 06:56:44.958107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.958264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.958289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.491 qpair failed and we were unable to recover it. 
00:30:40.491 [2024-04-17 06:56:44.958417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.958578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.958602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.491 qpair failed and we were unable to recover it. 00:30:40.491 [2024-04-17 06:56:44.958738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.958919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.958943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.491 qpair failed and we were unable to recover it. 00:30:40.491 [2024-04-17 06:56:44.959065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.959194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.959219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.491 qpair failed and we were unable to recover it. 00:30:40.491 [2024-04-17 06:56:44.959346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.959463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.959487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.491 qpair failed and we were unable to recover it. 00:30:40.491 [2024-04-17 06:56:44.959649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.959799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.959824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.491 qpair failed and we were unable to recover it. 00:30:40.491 [2024-04-17 06:56:44.959991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.960126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.491 [2024-04-17 06:56:44.960150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.492 qpair failed and we were unable to recover it. 00:30:40.492 [2024-04-17 06:56:44.960337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.960465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.960489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.492 qpair failed and we were unable to recover it. 
00:30:40.492 [2024-04-17 06:56:44.960625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.960809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.960833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.492 qpair failed and we were unable to recover it. 00:30:40.492 [2024-04-17 06:56:44.960977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.961158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.961195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.492 qpair failed and we were unable to recover it. 00:30:40.492 [2024-04-17 06:56:44.961337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.961464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.961489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.492 qpair failed and we were unable to recover it. 00:30:40.492 [2024-04-17 06:56:44.961624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.961781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.961807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.492 qpair failed and we were unable to recover it. 00:30:40.492 [2024-04-17 06:56:44.961935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.962081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.962106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.492 qpair failed and we were unable to recover it. 00:30:40.492 [2024-04-17 06:56:44.962278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.962436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.962462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.492 qpair failed and we were unable to recover it. 00:30:40.492 [2024-04-17 06:56:44.962592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 06:56:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.492 [2024-04-17 06:56:44.962717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.962742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.492 qpair failed and we were unable to recover it. 
00:30:40.492 06:56:44 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:40.492 [2024-04-17 06:56:44.962875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 06:56:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.492 [2024-04-17 06:56:44.963017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.963042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.492 qpair failed and we were unable to recover it. 00:30:40.492 06:56:44 -- common/autotest_common.sh@10 -- # set +x 00:30:40.492 [2024-04-17 06:56:44.963179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.963332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.963356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.492 qpair failed and we were unable to recover it. 00:30:40.492 [2024-04-17 06:56:44.963484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.963622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.963647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.492 qpair failed and we were unable to recover it. 00:30:40.492 [2024-04-17 06:56:44.963796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.963952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.963977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.492 qpair failed and we were unable to recover it. 00:30:40.492 [2024-04-17 06:56:44.964100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.964223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.964248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.492 qpair failed and we were unable to recover it. 00:30:40.492 [2024-04-17 06:56:44.964388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.964509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.964533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.492 qpair failed and we were unable to recover it. 00:30:40.492 [2024-04-17 06:56:44.964660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.964785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.964814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.492 qpair failed and we were unable to recover it. 
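The trace line from host/target_disconnect.sh@22 above creates the NVMe-oF subsystem: nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001, where -a allows any host NQN to connect and -s sets the serial number. A hand-run sketch of the same step, again assuming rpc.py against the default socket of a running nvmf_tgt:

  # Create subsystem cnode1, allow any host, and set its serial number.
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001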
00:30:40.492 [2024-04-17 06:56:44.964933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.965050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.965075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.492 qpair failed and we were unable to recover it. 00:30:40.492 [2024-04-17 06:56:44.965206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.965364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.965390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.492 qpair failed and we were unable to recover it. 00:30:40.492 [2024-04-17 06:56:44.965533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.965652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.965676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.492 qpair failed and we were unable to recover it. 00:30:40.492 [2024-04-17 06:56:44.965817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.492 [2024-04-17 06:56:44.966003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.966027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.493 qpair failed and we were unable to recover it. 00:30:40.493 [2024-04-17 06:56:44.966167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.966320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.966346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.493 qpair failed and we were unable to recover it. 00:30:40.493 [2024-04-17 06:56:44.966474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.966614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.966639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.493 qpair failed and we were unable to recover it. 00:30:40.493 [2024-04-17 06:56:44.966764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.966905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.966930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.493 qpair failed and we were unable to recover it. 
00:30:40.493 [2024-04-17 06:56:44.967106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.967244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.967270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.493 qpair failed and we were unable to recover it. 00:30:40.493 [2024-04-17 06:56:44.967427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.967593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.967618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.493 qpair failed and we were unable to recover it. 00:30:40.493 [2024-04-17 06:56:44.967740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.967892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.967920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.493 qpair failed and we were unable to recover it. 00:30:40.493 [2024-04-17 06:56:44.968041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.968162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.968191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.493 qpair failed and we were unable to recover it. 00:30:40.493 [2024-04-17 06:56:44.968332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.968481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.968505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.493 qpair failed and we were unable to recover it. 00:30:40.493 [2024-04-17 06:56:44.968630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.968754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.968777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.493 qpair failed and we were unable to recover it. 00:30:40.493 [2024-04-17 06:56:44.968898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.969054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.969079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.493 qpair failed and we were unable to recover it. 
00:30:40.493 [2024-04-17 06:56:44.969216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.969356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.969380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.493 qpair failed and we were unable to recover it. 00:30:40.493 [2024-04-17 06:56:44.969527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.969658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.969682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.493 qpair failed and we were unable to recover it. 00:30:40.493 [2024-04-17 06:56:44.969803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.969942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.969966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.493 qpair failed and we were unable to recover it. 00:30:40.493 [2024-04-17 06:56:44.970088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.970255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.970281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.493 qpair failed and we were unable to recover it. 00:30:40.493 [2024-04-17 06:56:44.970406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.970536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.970560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.493 qpair failed and we were unable to recover it. 00:30:40.493 [2024-04-17 06:56:44.970682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 06:56:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.493 [2024-04-17 06:56:44.970810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.970839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.493 06:56:44 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:40.493 qpair failed and we were unable to recover it. 00:30:40.493 06:56:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.493 [2024-04-17 06:56:44.971005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 06:56:44 -- common/autotest_common.sh@10 -- # set +x 00:30:40.493 [2024-04-17 06:56:44.971130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.971154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.493 qpair failed and we were unable to recover it. 
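The target_disconnect.sh@24 trace above attaches the Malloc0 bdev (whose name also surfaces earlier in this output) as a namespace of cnode1. Sketch of the equivalent manual step, assuming the Malloc0 bdev already exists on the target (the harness presumably created it earlier, e.g. with bdev_malloc_create):

  # Expose the existing Malloc0 bdev as a namespace of subsystem cnode1.
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0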
00:30:40.493 [2024-04-17 06:56:44.971285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.971407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.971432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.493 qpair failed and we were unable to recover it. 00:30:40.493 [2024-04-17 06:56:44.971559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.971685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.971709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.493 qpair failed and we were unable to recover it. 00:30:40.493 [2024-04-17 06:56:44.971838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.971964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.971988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.493 qpair failed and we were unable to recover it. 00:30:40.493 [2024-04-17 06:56:44.972114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.972238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.972264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.493 qpair failed and we were unable to recover it. 00:30:40.493 [2024-04-17 06:56:44.972425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.972586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.972611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.493 qpair failed and we were unable to recover it. 00:30:40.493 [2024-04-17 06:56:44.972742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.972921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.972945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.493 qpair failed and we were unable to recover it. 00:30:40.493 [2024-04-17 06:56:44.973081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.973234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.973260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.493 qpair failed and we were unable to recover it. 
00:30:40.493 [2024-04-17 06:56:44.973384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.973541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.973566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.493 qpair failed and we were unable to recover it. 00:30:40.493 [2024-04-17 06:56:44.973691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.973869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.973893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.493 qpair failed and we were unable to recover it. 00:30:40.493 [2024-04-17 06:56:44.974016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.493 [2024-04-17 06:56:44.974141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.974165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.494 qpair failed and we were unable to recover it. 00:30:40.494 [2024-04-17 06:56:44.974322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.974482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.974506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.494 qpair failed and we were unable to recover it. 00:30:40.494 [2024-04-17 06:56:44.974644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.974822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.974847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.494 qpair failed and we were unable to recover it. 00:30:40.494 [2024-04-17 06:56:44.974997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.975149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.975173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.494 qpair failed and we were unable to recover it. 00:30:40.494 [2024-04-17 06:56:44.975400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.975581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.975606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.494 qpair failed and we were unable to recover it. 
00:30:40.494 [2024-04-17 06:56:44.975731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.975883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.975908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.494 qpair failed and we were unable to recover it. 00:30:40.494 [2024-04-17 06:56:44.976060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.976189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.976215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.494 qpair failed and we were unable to recover it. 00:30:40.494 [2024-04-17 06:56:44.976341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.976467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.976491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.494 qpair failed and we were unable to recover it. 00:30:40.494 [2024-04-17 06:56:44.976651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.976779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.976803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.494 qpair failed and we were unable to recover it. 00:30:40.494 [2024-04-17 06:56:44.976929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.977084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.977109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.494 qpair failed and we were unable to recover it. 00:30:40.494 [2024-04-17 06:56:44.977235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.977361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.977385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.494 qpair failed and we were unable to recover it. 00:30:40.494 [2024-04-17 06:56:44.977508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.977690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.977715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.494 qpair failed and we were unable to recover it. 
00:30:40.494 [2024-04-17 06:56:44.977860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.978008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.978033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.494 qpair failed and we were unable to recover it. 00:30:40.494 [2024-04-17 06:56:44.978167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.978307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.978332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.494 qpair failed and we were unable to recover it. 00:30:40.494 [2024-04-17 06:56:44.978483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.978612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.978636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.494 qpair failed and we were unable to recover it. 00:30:40.494 06:56:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.494 [2024-04-17 06:56:44.978756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 06:56:44 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:40.494 [2024-04-17 06:56:44.978912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.978936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.494 qpair failed and we were unable to recover it. 00:30:40.494 06:56:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.494 [2024-04-17 06:56:44.979068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 06:56:44 -- common/autotest_common.sh@10 -- # set +x 00:30:40.494 [2024-04-17 06:56:44.979195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.979220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.494 qpair failed and we were unable to recover it. 00:30:40.494 [2024-04-17 06:56:44.979358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.979521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.979545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.494 qpair failed and we were unable to recover it. 
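target_disconnect.sh@25 then adds the TCP listener for cnode1 on 10.0.0.2:4420; the target acknowledges it shortly afterwards with the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice. Hand-run sketch of the same step, under the same rpc.py assumptions as above:

  # Start listening for cnode1 on 10.0.0.2:4420 over TCP.
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420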
00:30:40.494 [2024-04-17 06:56:44.979671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.979800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.979825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.494 qpair failed and we were unable to recover it. 00:30:40.494 [2024-04-17 06:56:44.979979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.980114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.980139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.494 qpair failed and we were unable to recover it. 00:30:40.494 [2024-04-17 06:56:44.980302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.980424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.980448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.494 qpair failed and we were unable to recover it. 00:30:40.494 [2024-04-17 06:56:44.980587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.980706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.494 [2024-04-17 06:56:44.980731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.494 qpair failed and we were unable to recover it. 00:30:40.494 [2024-04-17 06:56:44.980858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.495 [2024-04-17 06:56:44.981008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.495 [2024-04-17 06:56:44.981032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.495 qpair failed and we were unable to recover it. 00:30:40.495 [2024-04-17 06:56:44.981152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.495 [2024-04-17 06:56:44.981313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.495 [2024-04-17 06:56:44.981339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.495 qpair failed and we were unable to recover it. 00:30:40.495 [2024-04-17 06:56:44.981490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.495 [2024-04-17 06:56:44.981628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.495 [2024-04-17 06:56:44.981652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.495 qpair failed and we were unable to recover it. 
00:30:40.495 [2024-04-17 06:56:44.981810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.495 [2024-04-17 06:56:44.981963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.495 [2024-04-17 06:56:44.981988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.495 qpair failed and we were unable to recover it. 00:30:40.495 [2024-04-17 06:56:44.982140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.495 [2024-04-17 06:56:44.982276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.495 [2024-04-17 06:56:44.982301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.495 qpair failed and we were unable to recover it. 00:30:40.495 [2024-04-17 06:56:44.982426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.495 [2024-04-17 06:56:44.982560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.495 [2024-04-17 06:56:44.982584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73ec000b90 with addr=10.0.0.2, port=4420 00:30:40.495 qpair failed and we were unable to recover it. 00:30:40.495 [2024-04-17 06:56:44.982688] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.495 [2024-04-17 06:56:44.985143] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.495 [2024-04-17 06:56:44.985314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.495 [2024-04-17 06:56:44.985340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.495 [2024-04-17 06:56:44.985355] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.495 [2024-04-17 06:56:44.985367] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.495 [2024-04-17 06:56:44.985400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.495 qpair failed and we were unable to recover it. 
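Once the listener is up, the failure mode in the log changes: instead of ECONNREFUSED, the host's Fabrics CONNECT for an I/O qpair is rejected by the target with "Unknown controller ID 0x1", and the host reports "Connect command completed with error: sct 1, sc 130" (command-specific status 0x82, which for a Fabrics Connect command corresponds to Connect Invalid Parameters) followed by "CQ transport error -6 (No such device or address)". This is consistent with the disconnect scenario this test exercises: the I/O qpair references a controller ID the target no longer recognizes. One way to probe the listener by hand is sketched below; it is purely illustrative, assumes nvme-cli on a host that can reach 10.0.0.2, and is not what the harness itself does (the harness drives its own SPDK-based initiator):

  # Query the subsystems advertised on the listener from an initiator host.
  nvme discover -t tcp -a 10.0.0.2 -s 4420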
00:30:40.495 06:56:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.495 06:56:44 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:40.495 06:56:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.495 06:56:44 -- common/autotest_common.sh@10 -- # set +x 00:30:40.495 06:56:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.495 06:56:44 -- host/target_disconnect.sh@58 -- # wait 123303 00:30:40.495 [2024-04-17 06:56:44.995042] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.495 [2024-04-17 06:56:44.995172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.495 [2024-04-17 06:56:44.995209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.495 [2024-04-17 06:56:44.995223] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.495 [2024-04-17 06:56:44.995235] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.495 [2024-04-17 06:56:44.995264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.495 qpair failed and we were unable to recover it. 00:30:40.495 [2024-04-17 06:56:45.005065] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.495 [2024-04-17 06:56:45.005207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.495 [2024-04-17 06:56:45.005235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.495 [2024-04-17 06:56:45.005249] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.495 [2024-04-17 06:56:45.005261] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.495 [2024-04-17 06:56:45.005289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.495 qpair failed and we were unable to recover it. 00:30:40.495 [2024-04-17 06:56:45.015006] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.495 [2024-04-17 06:56:45.015147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.495 [2024-04-17 06:56:45.015187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.495 [2024-04-17 06:56:45.015204] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.495 [2024-04-17 06:56:45.015216] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.495 [2024-04-17 06:56:45.015244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.495 qpair failed and we were unable to recover it. 
00:30:40.495 [2024-04-17 06:56:45.025057] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.495 [2024-04-17 06:56:45.025196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.495 [2024-04-17 06:56:45.025223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.495 [2024-04-17 06:56:45.025237] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.495 [2024-04-17 06:56:45.025250] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.495 [2024-04-17 06:56:45.025278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.495 qpair failed and we were unable to recover it. 00:30:40.495 [2024-04-17 06:56:45.035085] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.495 [2024-04-17 06:56:45.035235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.495 [2024-04-17 06:56:45.035262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.495 [2024-04-17 06:56:45.035276] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.495 [2024-04-17 06:56:45.035288] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.495 [2024-04-17 06:56:45.035317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.495 qpair failed and we were unable to recover it. 00:30:40.495 [2024-04-17 06:56:45.045109] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.495 [2024-04-17 06:56:45.045283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.495 [2024-04-17 06:56:45.045311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.495 [2024-04-17 06:56:45.045325] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.495 [2024-04-17 06:56:45.045337] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.495 [2024-04-17 06:56:45.045365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.495 qpair failed and we were unable to recover it. 
00:30:40.495 [2024-04-17 06:56:45.055137] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.495 [2024-04-17 06:56:45.055293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.495 [2024-04-17 06:56:45.055318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.495 [2024-04-17 06:56:45.055332] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.495 [2024-04-17 06:56:45.055345] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.495 [2024-04-17 06:56:45.055373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.495 qpair failed and we were unable to recover it. 00:30:40.754 [2024-04-17 06:56:45.065133] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.755 [2024-04-17 06:56:45.065332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.755 [2024-04-17 06:56:45.065358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.755 [2024-04-17 06:56:45.065377] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.755 [2024-04-17 06:56:45.065390] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.755 [2024-04-17 06:56:45.065419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.755 qpair failed and we were unable to recover it. 00:30:40.755 [2024-04-17 06:56:45.075173] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.755 [2024-04-17 06:56:45.075311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.755 [2024-04-17 06:56:45.075338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.755 [2024-04-17 06:56:45.075352] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.755 [2024-04-17 06:56:45.075364] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.755 [2024-04-17 06:56:45.075392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.755 qpair failed and we were unable to recover it. 
00:30:40.755 [2024-04-17 06:56:45.085158] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.755 [2024-04-17 06:56:45.085298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.755 [2024-04-17 06:56:45.085324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.755 [2024-04-17 06:56:45.085338] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.755 [2024-04-17 06:56:45.085351] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.755 [2024-04-17 06:56:45.085379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.755 qpair failed and we were unable to recover it. 00:30:40.755 [2024-04-17 06:56:45.095256] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.755 [2024-04-17 06:56:45.095433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.755 [2024-04-17 06:56:45.095459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.755 [2024-04-17 06:56:45.095483] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.755 [2024-04-17 06:56:45.095495] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.755 [2024-04-17 06:56:45.095523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.755 qpair failed and we were unable to recover it. 00:30:40.755 [2024-04-17 06:56:45.105283] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.755 [2024-04-17 06:56:45.105429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.755 [2024-04-17 06:56:45.105455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.755 [2024-04-17 06:56:45.105480] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.755 [2024-04-17 06:56:45.105492] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.755 [2024-04-17 06:56:45.105520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.755 qpair failed and we were unable to recover it. 
00:30:40.755 [2024-04-17 06:56:45.115321] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.755 [2024-04-17 06:56:45.115450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.755 [2024-04-17 06:56:45.115475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.755 [2024-04-17 06:56:45.115490] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.755 [2024-04-17 06:56:45.115502] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.755 [2024-04-17 06:56:45.115530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.755 qpair failed and we were unable to recover it. 00:30:40.755 [2024-04-17 06:56:45.125378] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.755 [2024-04-17 06:56:45.125526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.755 [2024-04-17 06:56:45.125555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.755 [2024-04-17 06:56:45.125570] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.755 [2024-04-17 06:56:45.125582] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.755 [2024-04-17 06:56:45.125612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.755 qpair failed and we were unable to recover it. 00:30:40.755 [2024-04-17 06:56:45.135352] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.755 [2024-04-17 06:56:45.135485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.755 [2024-04-17 06:56:45.135512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.755 [2024-04-17 06:56:45.135526] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.755 [2024-04-17 06:56:45.135538] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.755 [2024-04-17 06:56:45.135566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.755 qpair failed and we were unable to recover it. 
00:30:40.755 [2024-04-17 06:56:45.145451] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.755 [2024-04-17 06:56:45.145593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.755 [2024-04-17 06:56:45.145619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.755 [2024-04-17 06:56:45.145633] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.755 [2024-04-17 06:56:45.145644] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.755 [2024-04-17 06:56:45.145673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.755 qpair failed and we were unable to recover it. 00:30:40.755 [2024-04-17 06:56:45.155537] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.755 [2024-04-17 06:56:45.155697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.755 [2024-04-17 06:56:45.155728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.755 [2024-04-17 06:56:45.155743] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.755 [2024-04-17 06:56:45.155755] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.755 [2024-04-17 06:56:45.155783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.755 qpair failed and we were unable to recover it. 00:30:40.755 [2024-04-17 06:56:45.165455] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.755 [2024-04-17 06:56:45.165598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.755 [2024-04-17 06:56:45.165624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.755 [2024-04-17 06:56:45.165638] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.755 [2024-04-17 06:56:45.165650] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.755 [2024-04-17 06:56:45.165678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.755 qpair failed and we were unable to recover it. 
00:30:40.755 [2024-04-17 06:56:45.175495] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.755 [2024-04-17 06:56:45.175642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.755 [2024-04-17 06:56:45.175668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.755 [2024-04-17 06:56:45.175682] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.755 [2024-04-17 06:56:45.175694] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.755 [2024-04-17 06:56:45.175721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.755 qpair failed and we were unable to recover it. 00:30:40.755 [2024-04-17 06:56:45.185511] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.755 [2024-04-17 06:56:45.185694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.755 [2024-04-17 06:56:45.185720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.756 [2024-04-17 06:56:45.185733] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.756 [2024-04-17 06:56:45.185745] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.756 [2024-04-17 06:56:45.185773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.756 qpair failed and we were unable to recover it. 00:30:40.756 [2024-04-17 06:56:45.195505] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.756 [2024-04-17 06:56:45.195637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.756 [2024-04-17 06:56:45.195662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.756 [2024-04-17 06:56:45.195677] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.756 [2024-04-17 06:56:45.195689] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.756 [2024-04-17 06:56:45.195722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.756 qpair failed and we were unable to recover it. 
00:30:40.756 [2024-04-17 06:56:45.205571] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.756 [2024-04-17 06:56:45.205748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.756 [2024-04-17 06:56:45.205773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.756 [2024-04-17 06:56:45.205787] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.756 [2024-04-17 06:56:45.205799] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.756 [2024-04-17 06:56:45.205827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.756 qpair failed and we were unable to recover it. 00:30:40.756 [2024-04-17 06:56:45.215676] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.756 [2024-04-17 06:56:45.215820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.756 [2024-04-17 06:56:45.215846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.756 [2024-04-17 06:56:45.215860] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.756 [2024-04-17 06:56:45.215872] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.756 [2024-04-17 06:56:45.215900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.756 qpair failed and we were unable to recover it. 00:30:40.756 [2024-04-17 06:56:45.225732] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.756 [2024-04-17 06:56:45.225868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.756 [2024-04-17 06:56:45.225896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.756 [2024-04-17 06:56:45.225914] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.756 [2024-04-17 06:56:45.225926] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.756 [2024-04-17 06:56:45.225955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.756 qpair failed and we were unable to recover it. 
00:30:40.756 [2024-04-17 06:56:45.235663] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.756 [2024-04-17 06:56:45.235794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.756 [2024-04-17 06:56:45.235820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.756 [2024-04-17 06:56:45.235834] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.756 [2024-04-17 06:56:45.235846] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.756 [2024-04-17 06:56:45.235874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.756 qpair failed and we were unable to recover it. 00:30:40.756 [2024-04-17 06:56:45.245747] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.756 [2024-04-17 06:56:45.245887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.756 [2024-04-17 06:56:45.245917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.756 [2024-04-17 06:56:45.245932] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.756 [2024-04-17 06:56:45.245944] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.756 [2024-04-17 06:56:45.245973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.756 qpair failed and we were unable to recover it. 00:30:40.756 [2024-04-17 06:56:45.255698] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.756 [2024-04-17 06:56:45.255831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.756 [2024-04-17 06:56:45.255856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.756 [2024-04-17 06:56:45.255870] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.756 [2024-04-17 06:56:45.255882] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.756 [2024-04-17 06:56:45.255909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.756 qpair failed and we were unable to recover it. 
00:30:40.756 [2024-04-17 06:56:45.265755] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.756 [2024-04-17 06:56:45.265897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.756 [2024-04-17 06:56:45.265923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.756 [2024-04-17 06:56:45.265937] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.756 [2024-04-17 06:56:45.265949] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.756 [2024-04-17 06:56:45.265977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.756 qpair failed and we were unable to recover it. 00:30:40.756 [2024-04-17 06:56:45.275761] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.756 [2024-04-17 06:56:45.275940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.756 [2024-04-17 06:56:45.275966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.756 [2024-04-17 06:56:45.275980] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.756 [2024-04-17 06:56:45.275992] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.756 [2024-04-17 06:56:45.276021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.756 qpair failed and we were unable to recover it. 00:30:40.756 [2024-04-17 06:56:45.285840] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.756 [2024-04-17 06:56:45.285991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.756 [2024-04-17 06:56:45.286017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.756 [2024-04-17 06:56:45.286031] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.756 [2024-04-17 06:56:45.286043] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.756 [2024-04-17 06:56:45.286076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.756 qpair failed and we were unable to recover it. 
00:30:40.756 [2024-04-17 06:56:45.295830] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.756 [2024-04-17 06:56:45.295994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.756 [2024-04-17 06:56:45.296020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.756 [2024-04-17 06:56:45.296034] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.756 [2024-04-17 06:56:45.296046] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.756 [2024-04-17 06:56:45.296074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.756 qpair failed and we were unable to recover it. 00:30:40.756 [2024-04-17 06:56:45.305858] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.756 [2024-04-17 06:56:45.305988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.756 [2024-04-17 06:56:45.306014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.756 [2024-04-17 06:56:45.306027] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.756 [2024-04-17 06:56:45.306039] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.756 [2024-04-17 06:56:45.306067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.756 qpair failed and we were unable to recover it. 00:30:40.756 [2024-04-17 06:56:45.315879] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.757 [2024-04-17 06:56:45.316042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.757 [2024-04-17 06:56:45.316068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.757 [2024-04-17 06:56:45.316082] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.757 [2024-04-17 06:56:45.316094] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.757 [2024-04-17 06:56:45.316122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.757 qpair failed and we were unable to recover it. 
00:30:40.757 [2024-04-17 06:56:45.325896] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.757 [2024-04-17 06:56:45.326030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.757 [2024-04-17 06:56:45.326056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.757 [2024-04-17 06:56:45.326070] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.757 [2024-04-17 06:56:45.326082] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.757 [2024-04-17 06:56:45.326109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.757 qpair failed and we were unable to recover it. 00:30:40.757 [2024-04-17 06:56:45.336020] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.757 [2024-04-17 06:56:45.336157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.757 [2024-04-17 06:56:45.336195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.757 [2024-04-17 06:56:45.336210] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.757 [2024-04-17 06:56:45.336222] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.757 [2024-04-17 06:56:45.336251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.757 qpair failed and we were unable to recover it. 00:30:40.757 [2024-04-17 06:56:45.346037] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.757 [2024-04-17 06:56:45.346199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.757 [2024-04-17 06:56:45.346226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.757 [2024-04-17 06:56:45.346240] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.757 [2024-04-17 06:56:45.346251] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.757 [2024-04-17 06:56:45.346280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.757 qpair failed and we were unable to recover it. 
00:30:40.757 [2024-04-17 06:56:45.356049] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.757 [2024-04-17 06:56:45.356184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.757 [2024-04-17 06:56:45.356210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.757 [2024-04-17 06:56:45.356225] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.757 [2024-04-17 06:56:45.356237] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:40.757 [2024-04-17 06:56:45.356265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.757 qpair failed and we were unable to recover it. 00:30:41.016 [2024-04-17 06:56:45.365990] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.016 [2024-04-17 06:56:45.366116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.016 [2024-04-17 06:56:45.366141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.016 [2024-04-17 06:56:45.366155] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.016 [2024-04-17 06:56:45.366167] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.016 [2024-04-17 06:56:45.366204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.016 qpair failed and we were unable to recover it. 00:30:41.016 [2024-04-17 06:56:45.376072] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.016 [2024-04-17 06:56:45.376247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.016 [2024-04-17 06:56:45.376272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.016 [2024-04-17 06:56:45.376286] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.016 [2024-04-17 06:56:45.376303] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.016 [2024-04-17 06:56:45.376333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.016 qpair failed and we were unable to recover it. 
00:30:41.016 [2024-04-17 06:56:45.386181] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.017 [2024-04-17 06:56:45.386356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.017 [2024-04-17 06:56:45.386382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.017 [2024-04-17 06:56:45.386396] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.017 [2024-04-17 06:56:45.386408] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.017 [2024-04-17 06:56:45.386436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.017 qpair failed and we were unable to recover it. 00:30:41.017 [2024-04-17 06:56:45.396083] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.017 [2024-04-17 06:56:45.396221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.017 [2024-04-17 06:56:45.396247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.017 [2024-04-17 06:56:45.396261] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.017 [2024-04-17 06:56:45.396273] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.017 [2024-04-17 06:56:45.396302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.017 qpair failed and we were unable to recover it. 00:30:41.017 [2024-04-17 06:56:45.406089] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.017 [2024-04-17 06:56:45.406229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.017 [2024-04-17 06:56:45.406256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.017 [2024-04-17 06:56:45.406270] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.017 [2024-04-17 06:56:45.406282] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.017 [2024-04-17 06:56:45.406310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.017 qpair failed and we were unable to recover it. 
00:30:41.017 [2024-04-17 06:56:45.416140] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.017 [2024-04-17 06:56:45.416287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.017 [2024-04-17 06:56:45.416312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.017 [2024-04-17 06:56:45.416326] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.017 [2024-04-17 06:56:45.416338] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.017 [2024-04-17 06:56:45.416366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.017 qpair failed and we were unable to recover it. 00:30:41.017 [2024-04-17 06:56:45.426195] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.017 [2024-04-17 06:56:45.426338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.017 [2024-04-17 06:56:45.426364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.017 [2024-04-17 06:56:45.426378] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.017 [2024-04-17 06:56:45.426390] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.017 [2024-04-17 06:56:45.426417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.017 qpair failed and we were unable to recover it. 00:30:41.017 [2024-04-17 06:56:45.436197] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.017 [2024-04-17 06:56:45.436332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.017 [2024-04-17 06:56:45.436357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.017 [2024-04-17 06:56:45.436371] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.017 [2024-04-17 06:56:45.436383] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.017 [2024-04-17 06:56:45.436411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.017 qpair failed and we were unable to recover it. 
00:30:41.017 [2024-04-17 06:56:45.446229] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.017 [2024-04-17 06:56:45.446359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.017 [2024-04-17 06:56:45.446384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.017 [2024-04-17 06:56:45.446398] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.017 [2024-04-17 06:56:45.446410] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.017 [2024-04-17 06:56:45.446438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.017 qpair failed and we were unable to recover it. 00:30:41.017 [2024-04-17 06:56:45.456295] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.017 [2024-04-17 06:56:45.456428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.017 [2024-04-17 06:56:45.456454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.017 [2024-04-17 06:56:45.456468] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.017 [2024-04-17 06:56:45.456480] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.017 [2024-04-17 06:56:45.456508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.017 qpair failed and we were unable to recover it. 00:30:41.017 [2024-04-17 06:56:45.466276] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.017 [2024-04-17 06:56:45.466419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.017 [2024-04-17 06:56:45.466444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.017 [2024-04-17 06:56:45.466464] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.017 [2024-04-17 06:56:45.466477] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.017 [2024-04-17 06:56:45.466505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.017 qpair failed and we were unable to recover it. 
00:30:41.017 [2024-04-17 06:56:45.476317] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.017 [2024-04-17 06:56:45.476453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.017 [2024-04-17 06:56:45.476479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.017 [2024-04-17 06:56:45.476493] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.017 [2024-04-17 06:56:45.476505] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.017 [2024-04-17 06:56:45.476533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.017 qpair failed and we were unable to recover it. 00:30:41.017 [2024-04-17 06:56:45.486355] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.017 [2024-04-17 06:56:45.486484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.017 [2024-04-17 06:56:45.486509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.017 [2024-04-17 06:56:45.486523] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.017 [2024-04-17 06:56:45.486535] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.017 [2024-04-17 06:56:45.486563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.017 qpair failed and we were unable to recover it. 00:30:41.017 [2024-04-17 06:56:45.496402] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.017 [2024-04-17 06:56:45.496583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.017 [2024-04-17 06:56:45.496608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.017 [2024-04-17 06:56:45.496623] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.017 [2024-04-17 06:56:45.496635] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.017 [2024-04-17 06:56:45.496663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.017 qpair failed and we were unable to recover it. 
00:30:41.017 [2024-04-17 06:56:45.506385] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.017 [2024-04-17 06:56:45.506518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.017 [2024-04-17 06:56:45.506543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.017 [2024-04-17 06:56:45.506557] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.017 [2024-04-17 06:56:45.506569] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.017 [2024-04-17 06:56:45.506598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.017 qpair failed and we were unable to recover it. 00:30:41.017 [2024-04-17 06:56:45.516408] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.017 [2024-04-17 06:56:45.516541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.017 [2024-04-17 06:56:45.516566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.017 [2024-04-17 06:56:45.516580] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.018 [2024-04-17 06:56:45.516592] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.018 [2024-04-17 06:56:45.516619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.018 qpair failed and we were unable to recover it. 00:30:41.018 [2024-04-17 06:56:45.526478] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.018 [2024-04-17 06:56:45.526656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.018 [2024-04-17 06:56:45.526681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.018 [2024-04-17 06:56:45.526695] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.018 [2024-04-17 06:56:45.526706] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.018 [2024-04-17 06:56:45.526735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.018 qpair failed and we were unable to recover it. 
00:30:41.018 [2024-04-17 06:56:45.536480] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.018 [2024-04-17 06:56:45.536616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.018 [2024-04-17 06:56:45.536641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.018 [2024-04-17 06:56:45.536655] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.018 [2024-04-17 06:56:45.536667] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.018 [2024-04-17 06:56:45.536695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.018 qpair failed and we were unable to recover it. 00:30:41.018 [2024-04-17 06:56:45.546508] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.018 [2024-04-17 06:56:45.546667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.018 [2024-04-17 06:56:45.546694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.018 [2024-04-17 06:56:45.546708] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.018 [2024-04-17 06:56:45.546724] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.018 [2024-04-17 06:56:45.546754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.018 qpair failed and we were unable to recover it. 00:30:41.018 [2024-04-17 06:56:45.556565] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.018 [2024-04-17 06:56:45.556695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.018 [2024-04-17 06:56:45.556720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.018 [2024-04-17 06:56:45.556742] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.018 [2024-04-17 06:56:45.556755] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.018 [2024-04-17 06:56:45.556785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.018 qpair failed and we were unable to recover it. 
00:30:41.018 [2024-04-17 06:56:45.566567] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.018 [2024-04-17 06:56:45.566694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.018 [2024-04-17 06:56:45.566721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.018 [2024-04-17 06:56:45.566735] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.018 [2024-04-17 06:56:45.566747] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.018 [2024-04-17 06:56:45.566787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.018 qpair failed and we were unable to recover it. 00:30:41.018 [2024-04-17 06:56:45.576592] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.018 [2024-04-17 06:56:45.576739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.018 [2024-04-17 06:56:45.576765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.018 [2024-04-17 06:56:45.576779] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.018 [2024-04-17 06:56:45.576791] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.018 [2024-04-17 06:56:45.576819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.018 qpair failed and we were unable to recover it. 00:30:41.018 [2024-04-17 06:56:45.586619] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.018 [2024-04-17 06:56:45.586759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.018 [2024-04-17 06:56:45.586786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.018 [2024-04-17 06:56:45.586801] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.018 [2024-04-17 06:56:45.586813] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.018 [2024-04-17 06:56:45.586854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.018 qpair failed and we were unable to recover it. 
00:30:41.018 [2024-04-17 06:56:45.596678] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.018 [2024-04-17 06:56:45.596826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.018 [2024-04-17 06:56:45.596853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.018 [2024-04-17 06:56:45.596867] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.018 [2024-04-17 06:56:45.596879] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.018 [2024-04-17 06:56:45.596907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.018 qpair failed and we were unable to recover it. 00:30:41.018 [2024-04-17 06:56:45.606728] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.018 [2024-04-17 06:56:45.606867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.018 [2024-04-17 06:56:45.606893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.018 [2024-04-17 06:56:45.606908] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.018 [2024-04-17 06:56:45.606920] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.018 [2024-04-17 06:56:45.606963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.018 qpair failed and we were unable to recover it. 00:30:41.018 [2024-04-17 06:56:45.616735] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.018 [2024-04-17 06:56:45.616904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.018 [2024-04-17 06:56:45.616928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.018 [2024-04-17 06:56:45.616942] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.018 [2024-04-17 06:56:45.616955] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.018 [2024-04-17 06:56:45.616997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.018 qpair failed and we were unable to recover it. 
00:30:41.277 [2024-04-17 06:56:45.626753] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.277 [2024-04-17 06:56:45.626900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.277 [2024-04-17 06:56:45.626926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.277 [2024-04-17 06:56:45.626941] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.277 [2024-04-17 06:56:45.626953] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.277 [2024-04-17 06:56:45.626982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.277 qpair failed and we were unable to recover it. 00:30:41.277 [2024-04-17 06:56:45.636783] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.277 [2024-04-17 06:56:45.636918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.277 [2024-04-17 06:56:45.636945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.277 [2024-04-17 06:56:45.636959] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.277 [2024-04-17 06:56:45.636972] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.277 [2024-04-17 06:56:45.637027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.277 qpair failed and we were unable to recover it. 00:30:41.277 [2024-04-17 06:56:45.646789] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.277 [2024-04-17 06:56:45.646934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.277 [2024-04-17 06:56:45.646965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.278 [2024-04-17 06:56:45.646981] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.278 [2024-04-17 06:56:45.646994] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.278 [2024-04-17 06:56:45.647038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.278 qpair failed and we were unable to recover it. 
00:30:41.278 [2024-04-17 06:56:45.656825] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.278 [2024-04-17 06:56:45.656969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.278 [2024-04-17 06:56:45.656996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.278 [2024-04-17 06:56:45.657011] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.278 [2024-04-17 06:56:45.657026] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.278 [2024-04-17 06:56:45.657066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.278 qpair failed and we were unable to recover it. 00:30:41.278 [2024-04-17 06:56:45.666822] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.278 [2024-04-17 06:56:45.666964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.278 [2024-04-17 06:56:45.666991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.278 [2024-04-17 06:56:45.667005] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.278 [2024-04-17 06:56:45.667017] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.278 [2024-04-17 06:56:45.667046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.278 qpair failed and we were unable to recover it. 00:30:41.278 [2024-04-17 06:56:45.676870] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.278 [2024-04-17 06:56:45.677009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.278 [2024-04-17 06:56:45.677034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.278 [2024-04-17 06:56:45.677049] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.278 [2024-04-17 06:56:45.677062] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.278 [2024-04-17 06:56:45.677090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.278 qpair failed and we were unable to recover it. 
00:30:41.278 [2024-04-17 06:56:45.686894] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.278 [2024-04-17 06:56:45.687057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.278 [2024-04-17 06:56:45.687084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.278 [2024-04-17 06:56:45.687099] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.278 [2024-04-17 06:56:45.687129] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.278 [2024-04-17 06:56:45.687199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.278 qpair failed and we were unable to recover it. 00:30:41.278 [2024-04-17 06:56:45.696941] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.278 [2024-04-17 06:56:45.697122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.278 [2024-04-17 06:56:45.697148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.278 [2024-04-17 06:56:45.697163] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.278 [2024-04-17 06:56:45.697182] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.278 [2024-04-17 06:56:45.697213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.278 qpair failed and we were unable to recover it. 00:30:41.278 [2024-04-17 06:56:45.706937] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.278 [2024-04-17 06:56:45.707089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.278 [2024-04-17 06:56:45.707115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.278 [2024-04-17 06:56:45.707130] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.278 [2024-04-17 06:56:45.707142] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.278 [2024-04-17 06:56:45.707171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.278 qpair failed and we were unable to recover it. 
00:30:41.278 [2024-04-17 06:56:45.716966] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.278 [2024-04-17 06:56:45.717102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.278 [2024-04-17 06:56:45.717128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.278 [2024-04-17 06:56:45.717143] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.278 [2024-04-17 06:56:45.717155] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.278 [2024-04-17 06:56:45.717190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.278 qpair failed and we were unable to recover it. 00:30:41.278 [2024-04-17 06:56:45.727013] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.278 [2024-04-17 06:56:45.727140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.278 [2024-04-17 06:56:45.727181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.278 [2024-04-17 06:56:45.727199] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.278 [2024-04-17 06:56:45.727212] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.278 [2024-04-17 06:56:45.727253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.278 qpair failed and we were unable to recover it. 00:30:41.278 [2024-04-17 06:56:45.737087] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.278 [2024-04-17 06:56:45.737259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.278 [2024-04-17 06:56:45.737291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.278 [2024-04-17 06:56:45.737307] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.278 [2024-04-17 06:56:45.737319] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.278 [2024-04-17 06:56:45.737360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.278 qpair failed and we were unable to recover it. 
00:30:41.278 [2024-04-17 06:56:45.747062] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.278 [2024-04-17 06:56:45.747220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.278 [2024-04-17 06:56:45.747247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.278 [2024-04-17 06:56:45.747261] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.278 [2024-04-17 06:56:45.747274] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.278 [2024-04-17 06:56:45.747315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.278 qpair failed and we were unable to recover it. 00:30:41.278 [2024-04-17 06:56:45.757075] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.278 [2024-04-17 06:56:45.757204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.278 [2024-04-17 06:56:45.757230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.278 [2024-04-17 06:56:45.757245] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.278 [2024-04-17 06:56:45.757258] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.278 [2024-04-17 06:56:45.757287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.278 qpair failed and we were unable to recover it. 00:30:41.278 [2024-04-17 06:56:45.767103] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.278 [2024-04-17 06:56:45.767285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.278 [2024-04-17 06:56:45.767312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.278 [2024-04-17 06:56:45.767326] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.278 [2024-04-17 06:56:45.767339] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.278 [2024-04-17 06:56:45.767368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.278 qpair failed and we were unable to recover it. 
00:30:41.278 [2024-04-17 06:56:45.777186] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.278 [2024-04-17 06:56:45.777368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.278 [2024-04-17 06:56:45.777394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.278 [2024-04-17 06:56:45.777410] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.278 [2024-04-17 06:56:45.777427] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.279 [2024-04-17 06:56:45.777457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.279 qpair failed and we were unable to recover it. 00:30:41.279 [2024-04-17 06:56:45.787169] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.279 [2024-04-17 06:56:45.787321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.279 [2024-04-17 06:56:45.787346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.279 [2024-04-17 06:56:45.787361] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.279 [2024-04-17 06:56:45.787374] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.279 [2024-04-17 06:56:45.787416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.279 qpair failed and we were unable to recover it. 00:30:41.279 [2024-04-17 06:56:45.797366] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.279 [2024-04-17 06:56:45.797516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.279 [2024-04-17 06:56:45.797557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.279 [2024-04-17 06:56:45.797572] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.279 [2024-04-17 06:56:45.797584] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.279 [2024-04-17 06:56:45.797612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.279 qpair failed and we were unable to recover it. 
00:30:41.279 [2024-04-17 06:56:45.807276] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.279 [2024-04-17 06:56:45.807421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.279 [2024-04-17 06:56:45.807447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.279 [2024-04-17 06:56:45.807462] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.279 [2024-04-17 06:56:45.807475] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.279 [2024-04-17 06:56:45.807503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.279 qpair failed and we were unable to recover it. 00:30:41.279 [2024-04-17 06:56:45.817323] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.279 [2024-04-17 06:56:45.817463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.279 [2024-04-17 06:56:45.817489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.279 [2024-04-17 06:56:45.817504] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.279 [2024-04-17 06:56:45.817516] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.279 [2024-04-17 06:56:45.817557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.279 qpair failed and we were unable to recover it. 00:30:41.279 [2024-04-17 06:56:45.827343] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.279 [2024-04-17 06:56:45.827493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.279 [2024-04-17 06:56:45.827520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.279 [2024-04-17 06:56:45.827535] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.279 [2024-04-17 06:56:45.827547] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.279 [2024-04-17 06:56:45.827576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.279 qpair failed and we were unable to recover it. 
00:30:41.279 [2024-04-17 06:56:45.837321] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.279 [2024-04-17 06:56:45.837466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.279 [2024-04-17 06:56:45.837492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.279 [2024-04-17 06:56:45.837506] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.279 [2024-04-17 06:56:45.837519] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.279 [2024-04-17 06:56:45.837562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.279 qpair failed and we were unable to recover it. 00:30:41.279 [2024-04-17 06:56:45.847378] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.279 [2024-04-17 06:56:45.847509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.279 [2024-04-17 06:56:45.847536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.279 [2024-04-17 06:56:45.847551] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.279 [2024-04-17 06:56:45.847564] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.279 [2024-04-17 06:56:45.847592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.279 qpair failed and we were unable to recover it. 00:30:41.279 [2024-04-17 06:56:45.857384] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.279 [2024-04-17 06:56:45.857517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.279 [2024-04-17 06:56:45.857543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.279 [2024-04-17 06:56:45.857558] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.279 [2024-04-17 06:56:45.857571] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.279 [2024-04-17 06:56:45.857599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.279 qpair failed and we were unable to recover it. 
00:30:41.279 [2024-04-17 06:56:45.867408] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.279 [2024-04-17 06:56:45.867539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.279 [2024-04-17 06:56:45.867565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.279 [2024-04-17 06:56:45.867586] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.279 [2024-04-17 06:56:45.867599] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.279 [2024-04-17 06:56:45.867628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.279 qpair failed and we were unable to recover it. 00:30:41.279 [2024-04-17 06:56:45.877440] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.279 [2024-04-17 06:56:45.877574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.279 [2024-04-17 06:56:45.877600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.279 [2024-04-17 06:56:45.877616] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.279 [2024-04-17 06:56:45.877630] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.279 [2024-04-17 06:56:45.877673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.279 qpair failed and we were unable to recover it. 00:30:41.538 [2024-04-17 06:56:45.887515] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.538 [2024-04-17 06:56:45.887697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.538 [2024-04-17 06:56:45.887737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.538 [2024-04-17 06:56:45.887752] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.538 [2024-04-17 06:56:45.887764] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.538 [2024-04-17 06:56:45.887808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.538 qpair failed and we were unable to recover it. 
00:30:41.538 [2024-04-17 06:56:45.897487] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.538 [2024-04-17 06:56:45.897635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.538 [2024-04-17 06:56:45.897659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.538 [2024-04-17 06:56:45.897673] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.538 [2024-04-17 06:56:45.897686] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.538 [2024-04-17 06:56:45.897715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.538 qpair failed and we were unable to recover it. 00:30:41.538 [2024-04-17 06:56:45.907503] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.538 [2024-04-17 06:56:45.907632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.538 [2024-04-17 06:56:45.907656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.538 [2024-04-17 06:56:45.907671] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.538 [2024-04-17 06:56:45.907683] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.538 [2024-04-17 06:56:45.907712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.538 qpair failed and we were unable to recover it. 00:30:41.538 [2024-04-17 06:56:45.917621] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.538 [2024-04-17 06:56:45.917753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.538 [2024-04-17 06:56:45.917779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.538 [2024-04-17 06:56:45.917793] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.538 [2024-04-17 06:56:45.917806] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.538 [2024-04-17 06:56:45.917847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.538 qpair failed and we were unable to recover it. 
00:30:41.538 [2024-04-17 06:56:45.927602] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.538 [2024-04-17 06:56:45.927739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.538 [2024-04-17 06:56:45.927765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.538 [2024-04-17 06:56:45.927779] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.538 [2024-04-17 06:56:45.927792] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.539 [2024-04-17 06:56:45.927833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.539 qpair failed and we were unable to recover it. 00:30:41.539 [2024-04-17 06:56:45.937608] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.539 [2024-04-17 06:56:45.937739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.539 [2024-04-17 06:56:45.937764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.539 [2024-04-17 06:56:45.937778] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.539 [2024-04-17 06:56:45.937791] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.539 [2024-04-17 06:56:45.937819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.539 qpair failed and we were unable to recover it. 00:30:41.539 [2024-04-17 06:56:45.947645] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.539 [2024-04-17 06:56:45.947777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.539 [2024-04-17 06:56:45.947805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.539 [2024-04-17 06:56:45.947820] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.539 [2024-04-17 06:56:45.947833] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.539 [2024-04-17 06:56:45.947873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.539 qpair failed and we were unable to recover it. 
00:30:41.539 [2024-04-17 06:56:45.957684] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.539 [2024-04-17 06:56:45.957833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.539 [2024-04-17 06:56:45.957858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.539 [2024-04-17 06:56:45.957878] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.539 [2024-04-17 06:56:45.957892] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.539 [2024-04-17 06:56:45.957921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.539 qpair failed and we were unable to recover it. 00:30:41.539 [2024-04-17 06:56:45.967706] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.539 [2024-04-17 06:56:45.967843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.539 [2024-04-17 06:56:45.967868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.539 [2024-04-17 06:56:45.967883] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.539 [2024-04-17 06:56:45.967895] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.539 [2024-04-17 06:56:45.967924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.539 qpair failed and we were unable to recover it. 00:30:41.539 [2024-04-17 06:56:45.977725] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.539 [2024-04-17 06:56:45.977865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.539 [2024-04-17 06:56:45.977890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.539 [2024-04-17 06:56:45.977905] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.539 [2024-04-17 06:56:45.977918] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.539 [2024-04-17 06:56:45.977946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.539 qpair failed and we were unable to recover it. 
00:30:41.539 [2024-04-17 06:56:45.987800] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.539 [2024-04-17 06:56:45.987963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.539 [2024-04-17 06:56:45.987988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.539 [2024-04-17 06:56:45.988002] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.539 [2024-04-17 06:56:45.988015] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.539 [2024-04-17 06:56:45.988044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.539 qpair failed and we were unable to recover it. 00:30:41.539 [2024-04-17 06:56:45.997788] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.539 [2024-04-17 06:56:45.997931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.539 [2024-04-17 06:56:45.997957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.539 [2024-04-17 06:56:45.997971] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.539 [2024-04-17 06:56:45.997984] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.539 [2024-04-17 06:56:45.998038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.539 qpair failed and we were unable to recover it. 00:30:41.539 [2024-04-17 06:56:46.007852] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.539 [2024-04-17 06:56:46.008013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.539 [2024-04-17 06:56:46.008038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.539 [2024-04-17 06:56:46.008067] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.539 [2024-04-17 06:56:46.008080] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.539 [2024-04-17 06:56:46.008109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.539 qpair failed and we were unable to recover it. 
00:30:41.539 [2024-04-17 06:56:46.017879] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.539 [2024-04-17 06:56:46.018053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.539 [2024-04-17 06:56:46.018078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.539 [2024-04-17 06:56:46.018094] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.539 [2024-04-17 06:56:46.018107] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.539 [2024-04-17 06:56:46.018136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.539 qpair failed and we were unable to recover it. 00:30:41.539 [2024-04-17 06:56:46.027899] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.539 [2024-04-17 06:56:46.028061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.539 [2024-04-17 06:56:46.028086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.539 [2024-04-17 06:56:46.028100] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.539 [2024-04-17 06:56:46.028113] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.539 [2024-04-17 06:56:46.028142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.539 qpair failed and we were unable to recover it. 00:30:41.539 [2024-04-17 06:56:46.037928] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.539 [2024-04-17 06:56:46.038105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.539 [2024-04-17 06:56:46.038129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.539 [2024-04-17 06:56:46.038144] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.539 [2024-04-17 06:56:46.038157] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.539 [2024-04-17 06:56:46.038196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.539 qpair failed and we were unable to recover it. 
00:30:41.539 [2024-04-17 06:56:46.047911] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.539 [2024-04-17 06:56:46.048057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.539 [2024-04-17 06:56:46.048087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.539 [2024-04-17 06:56:46.048105] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.539 [2024-04-17 06:56:46.048118] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.539 [2024-04-17 06:56:46.048147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.539 qpair failed and we were unable to recover it. 00:30:41.539 [2024-04-17 06:56:46.057995] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.539 [2024-04-17 06:56:46.058137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.539 [2024-04-17 06:56:46.058161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.539 [2024-04-17 06:56:46.058184] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.539 [2024-04-17 06:56:46.058199] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.539 [2024-04-17 06:56:46.058228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.539 qpair failed and we were unable to recover it. 00:30:41.539 [2024-04-17 06:56:46.068003] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.540 [2024-04-17 06:56:46.068134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.540 [2024-04-17 06:56:46.068159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.540 [2024-04-17 06:56:46.068173] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.540 [2024-04-17 06:56:46.068194] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.540 [2024-04-17 06:56:46.068224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.540 qpair failed and we were unable to recover it. 
00:30:41.540 [2024-04-17 06:56:46.078058] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.540 [2024-04-17 06:56:46.078218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.540 [2024-04-17 06:56:46.078244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.540 [2024-04-17 06:56:46.078258] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.540 [2024-04-17 06:56:46.078271] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.540 [2024-04-17 06:56:46.078300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.540 qpair failed and we were unable to recover it. 00:30:41.540 [2024-04-17 06:56:46.088058] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.540 [2024-04-17 06:56:46.088196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.540 [2024-04-17 06:56:46.088222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.540 [2024-04-17 06:56:46.088237] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.540 [2024-04-17 06:56:46.088249] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.540 [2024-04-17 06:56:46.088284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.540 qpair failed and we were unable to recover it. 00:30:41.540 [2024-04-17 06:56:46.098100] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.540 [2024-04-17 06:56:46.098241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.540 [2024-04-17 06:56:46.098266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.540 [2024-04-17 06:56:46.098281] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.540 [2024-04-17 06:56:46.098293] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.540 [2024-04-17 06:56:46.098323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.540 qpair failed and we were unable to recover it. 
00:30:41.540 [2024-04-17 06:56:46.108130] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.540 [2024-04-17 06:56:46.108313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.540 [2024-04-17 06:56:46.108339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.540 [2024-04-17 06:56:46.108353] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.540 [2024-04-17 06:56:46.108366] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.540 [2024-04-17 06:56:46.108396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.540 qpair failed and we were unable to recover it. 00:30:41.540 [2024-04-17 06:56:46.118133] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.540 [2024-04-17 06:56:46.118320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.540 [2024-04-17 06:56:46.118345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.540 [2024-04-17 06:56:46.118361] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.540 [2024-04-17 06:56:46.118374] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.540 [2024-04-17 06:56:46.118403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.540 qpair failed and we were unable to recover it. 00:30:41.540 [2024-04-17 06:56:46.128183] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.540 [2024-04-17 06:56:46.128328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.540 [2024-04-17 06:56:46.128354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.540 [2024-04-17 06:56:46.128368] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.540 [2024-04-17 06:56:46.128381] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.540 [2024-04-17 06:56:46.128409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.540 qpair failed and we were unable to recover it. 
00:30:41.540 [2024-04-17 06:56:46.138202] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.540 [2024-04-17 06:56:46.138374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.540 [2024-04-17 06:56:46.138404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.540 [2024-04-17 06:56:46.138419] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.540 [2024-04-17 06:56:46.138432] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.540 [2024-04-17 06:56:46.138461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.540 qpair failed and we were unable to recover it. 00:30:41.800 [2024-04-17 06:56:46.148237] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.800 [2024-04-17 06:56:46.148368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.800 [2024-04-17 06:56:46.148395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.800 [2024-04-17 06:56:46.148410] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.800 [2024-04-17 06:56:46.148423] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.800 [2024-04-17 06:56:46.148452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.800 qpair failed and we were unable to recover it. 00:30:41.800 [2024-04-17 06:56:46.158250] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.800 [2024-04-17 06:56:46.158376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.800 [2024-04-17 06:56:46.158403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.800 [2024-04-17 06:56:46.158418] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.800 [2024-04-17 06:56:46.158430] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.800 [2024-04-17 06:56:46.158459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.800 qpair failed and we were unable to recover it. 
00:30:41.800 [2024-04-17 06:56:46.168299] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.800 [2024-04-17 06:56:46.168427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.800 [2024-04-17 06:56:46.168453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.800 [2024-04-17 06:56:46.168469] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.800 [2024-04-17 06:56:46.168481] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.800 [2024-04-17 06:56:46.168511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.800 qpair failed and we were unable to recover it. 00:30:41.800 [2024-04-17 06:56:46.178369] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.800 [2024-04-17 06:56:46.178536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.800 [2024-04-17 06:56:46.178562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.800 [2024-04-17 06:56:46.178578] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.800 [2024-04-17 06:56:46.178599] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.800 [2024-04-17 06:56:46.178629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.800 qpair failed and we were unable to recover it. 00:30:41.800 [2024-04-17 06:56:46.188410] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.800 [2024-04-17 06:56:46.188608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.800 [2024-04-17 06:56:46.188638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.800 [2024-04-17 06:56:46.188654] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.800 [2024-04-17 06:56:46.188667] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.800 [2024-04-17 06:56:46.188698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.800 qpair failed and we were unable to recover it. 
00:30:41.800 [2024-04-17 06:56:46.198365] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.800 [2024-04-17 06:56:46.198495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.800 [2024-04-17 06:56:46.198521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.800 [2024-04-17 06:56:46.198535] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.800 [2024-04-17 06:56:46.198547] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.800 [2024-04-17 06:56:46.198577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.800 qpair failed and we were unable to recover it. 00:30:41.800 [2024-04-17 06:56:46.208421] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.800 [2024-04-17 06:56:46.208564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.800 [2024-04-17 06:56:46.208589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.800 [2024-04-17 06:56:46.208604] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.800 [2024-04-17 06:56:46.208616] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.800 [2024-04-17 06:56:46.208645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.800 qpair failed and we were unable to recover it. 00:30:41.800 [2024-04-17 06:56:46.218462] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.800 [2024-04-17 06:56:46.218600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.800 [2024-04-17 06:56:46.218626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.800 [2024-04-17 06:56:46.218641] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.800 [2024-04-17 06:56:46.218654] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.800 [2024-04-17 06:56:46.218683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.800 qpair failed and we were unable to recover it. 
00:30:41.800 [2024-04-17 06:56:46.228477] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.800 [2024-04-17 06:56:46.228609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.800 [2024-04-17 06:56:46.228635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.800 [2024-04-17 06:56:46.228649] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.800 [2024-04-17 06:56:46.228662] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.800 [2024-04-17 06:56:46.228691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.800 qpair failed and we were unable to recover it. 00:30:41.800 [2024-04-17 06:56:46.238514] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.800 [2024-04-17 06:56:46.238648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.800 [2024-04-17 06:56:46.238674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.800 [2024-04-17 06:56:46.238689] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.800 [2024-04-17 06:56:46.238702] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.800 [2024-04-17 06:56:46.238730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.800 qpair failed and we were unable to recover it. 00:30:41.800 [2024-04-17 06:56:46.248544] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.800 [2024-04-17 06:56:46.248672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.801 [2024-04-17 06:56:46.248698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.801 [2024-04-17 06:56:46.248712] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.801 [2024-04-17 06:56:46.248725] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.801 [2024-04-17 06:56:46.248769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.801 qpair failed and we were unable to recover it. 
00:30:41.801 [2024-04-17 06:56:46.258590] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.801 [2024-04-17 06:56:46.258724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.801 [2024-04-17 06:56:46.258750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.801 [2024-04-17 06:56:46.258764] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.801 [2024-04-17 06:56:46.258777] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.801 [2024-04-17 06:56:46.258806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.801 qpair failed and we were unable to recover it. 00:30:41.801 [2024-04-17 06:56:46.268604] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.801 [2024-04-17 06:56:46.268735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.801 [2024-04-17 06:56:46.268761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.801 [2024-04-17 06:56:46.268775] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.801 [2024-04-17 06:56:46.268792] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.801 [2024-04-17 06:56:46.268822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.801 qpair failed and we were unable to recover it. 00:30:41.801 [2024-04-17 06:56:46.278651] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.801 [2024-04-17 06:56:46.278784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.801 [2024-04-17 06:56:46.278809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.801 [2024-04-17 06:56:46.278823] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.801 [2024-04-17 06:56:46.278835] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.801 [2024-04-17 06:56:46.278864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.801 qpair failed and we were unable to recover it. 
00:30:41.801 [2024-04-17 06:56:46.288655] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.801 [2024-04-17 06:56:46.288780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.801 [2024-04-17 06:56:46.288806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.801 [2024-04-17 06:56:46.288821] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.801 [2024-04-17 06:56:46.288834] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.801 [2024-04-17 06:56:46.288862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.801 qpair failed and we were unable to recover it. 00:30:41.801 [2024-04-17 06:56:46.298758] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.801 [2024-04-17 06:56:46.298893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.801 [2024-04-17 06:56:46.298917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.801 [2024-04-17 06:56:46.298931] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.801 [2024-04-17 06:56:46.298944] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.801 [2024-04-17 06:56:46.298973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.801 qpair failed and we were unable to recover it. 00:30:41.801 [2024-04-17 06:56:46.308724] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.801 [2024-04-17 06:56:46.308875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.801 [2024-04-17 06:56:46.308900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.801 [2024-04-17 06:56:46.308915] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.801 [2024-04-17 06:56:46.308928] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.801 [2024-04-17 06:56:46.308956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.801 qpair failed and we were unable to recover it. 
00:30:41.801 [2024-04-17 06:56:46.318761] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.801 [2024-04-17 06:56:46.318932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.801 [2024-04-17 06:56:46.318957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.801 [2024-04-17 06:56:46.318972] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.801 [2024-04-17 06:56:46.318984] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.801 [2024-04-17 06:56:46.319029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.801 qpair failed and we were unable to recover it. 00:30:41.801 [2024-04-17 06:56:46.328787] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.801 [2024-04-17 06:56:46.328924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.801 [2024-04-17 06:56:46.328949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.801 [2024-04-17 06:56:46.328963] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.801 [2024-04-17 06:56:46.328975] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.801 [2024-04-17 06:56:46.329004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.801 qpair failed and we were unable to recover it. 00:30:41.801 [2024-04-17 06:56:46.338857] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.801 [2024-04-17 06:56:46.339048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.801 [2024-04-17 06:56:46.339076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.801 [2024-04-17 06:56:46.339092] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.801 [2024-04-17 06:56:46.339106] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.801 [2024-04-17 06:56:46.339136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.801 qpair failed and we were unable to recover it. 
00:30:41.801 [2024-04-17 06:56:46.348853] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.801 [2024-04-17 06:56:46.348992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.801 [2024-04-17 06:56:46.349018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.801 [2024-04-17 06:56:46.349033] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.801 [2024-04-17 06:56:46.349046] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.801 [2024-04-17 06:56:46.349074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.801 qpair failed and we were unable to recover it. 00:30:41.801 [2024-04-17 06:56:46.358873] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.801 [2024-04-17 06:56:46.359005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.801 [2024-04-17 06:56:46.359031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.801 [2024-04-17 06:56:46.359052] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.801 [2024-04-17 06:56:46.359065] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.801 [2024-04-17 06:56:46.359094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.801 qpair failed and we were unable to recover it. 00:30:41.801 [2024-04-17 06:56:46.368891] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.801 [2024-04-17 06:56:46.369019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.801 [2024-04-17 06:56:46.369043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.801 [2024-04-17 06:56:46.369058] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.801 [2024-04-17 06:56:46.369071] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.801 [2024-04-17 06:56:46.369101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.801 qpair failed and we were unable to recover it. 
00:30:41.801 [2024-04-17 06:56:46.378971] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.801 [2024-04-17 06:56:46.379108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.801 [2024-04-17 06:56:46.379134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.801 [2024-04-17 06:56:46.379148] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.801 [2024-04-17 06:56:46.379161] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.802 [2024-04-17 06:56:46.379198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.802 qpair failed and we were unable to recover it. 00:30:41.802 [2024-04-17 06:56:46.388976] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.802 [2024-04-17 06:56:46.389126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.802 [2024-04-17 06:56:46.389152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.802 [2024-04-17 06:56:46.389167] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.802 [2024-04-17 06:56:46.389186] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.802 [2024-04-17 06:56:46.389216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.802 qpair failed and we were unable to recover it. 00:30:41.802 [2024-04-17 06:56:46.398983] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.802 [2024-04-17 06:56:46.399111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.802 [2024-04-17 06:56:46.399137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.802 [2024-04-17 06:56:46.399151] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.802 [2024-04-17 06:56:46.399164] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:41.802 [2024-04-17 06:56:46.399199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.802 qpair failed and we were unable to recover it. 
00:30:42.061 [2024-04-17 06:56:46.409043] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.061 [2024-04-17 06:56:46.409197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.061 [2024-04-17 06:56:46.409224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.061 [2024-04-17 06:56:46.409238] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.061 [2024-04-17 06:56:46.409251] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.061 [2024-04-17 06:56:46.409279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.061 qpair failed and we were unable to recover it. 00:30:42.061 [2024-04-17 06:56:46.419057] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.061 [2024-04-17 06:56:46.419201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.061 [2024-04-17 06:56:46.419226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.061 [2024-04-17 06:56:46.419241] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.061 [2024-04-17 06:56:46.419253] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.061 [2024-04-17 06:56:46.419282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.061 qpair failed and we were unable to recover it. 00:30:42.061 [2024-04-17 06:56:46.429086] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.061 [2024-04-17 06:56:46.429224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.061 [2024-04-17 06:56:46.429249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.061 [2024-04-17 06:56:46.429263] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.061 [2024-04-17 06:56:46.429276] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.061 [2024-04-17 06:56:46.429305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.061 qpair failed and we were unable to recover it. 
00:30:42.061 [2024-04-17 06:56:46.439104] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.061 [2024-04-17 06:56:46.439242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.061 [2024-04-17 06:56:46.439268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.061 [2024-04-17 06:56:46.439282] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.061 [2024-04-17 06:56:46.439295] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.061 [2024-04-17 06:56:46.439323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.061 qpair failed and we were unable to recover it. 00:30:42.061 [2024-04-17 06:56:46.449123] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.061 [2024-04-17 06:56:46.449281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.061 [2024-04-17 06:56:46.449311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.061 [2024-04-17 06:56:46.449326] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.061 [2024-04-17 06:56:46.449339] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.061 [2024-04-17 06:56:46.449368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.061 qpair failed and we were unable to recover it. 00:30:42.061 [2024-04-17 06:56:46.459161] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.061 [2024-04-17 06:56:46.459299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.061 [2024-04-17 06:56:46.459324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.061 [2024-04-17 06:56:46.459339] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.061 [2024-04-17 06:56:46.459351] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.061 [2024-04-17 06:56:46.459379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.061 qpair failed and we were unable to recover it. 
00:30:42.061 [2024-04-17 06:56:46.469203] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.061 [2024-04-17 06:56:46.469336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.061 [2024-04-17 06:56:46.469361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.061 [2024-04-17 06:56:46.469375] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.061 [2024-04-17 06:56:46.469388] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.061 [2024-04-17 06:56:46.469417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.061 qpair failed and we were unable to recover it. 00:30:42.061 [2024-04-17 06:56:46.479295] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.061 [2024-04-17 06:56:46.479426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.061 [2024-04-17 06:56:46.479451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.061 [2024-04-17 06:56:46.479465] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.061 [2024-04-17 06:56:46.479478] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.061 [2024-04-17 06:56:46.479506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.061 qpair failed and we were unable to recover it. 00:30:42.062 [2024-04-17 06:56:46.489231] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.062 [2024-04-17 06:56:46.489380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.062 [2024-04-17 06:56:46.489405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.062 [2024-04-17 06:56:46.489420] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.062 [2024-04-17 06:56:46.489433] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.062 [2024-04-17 06:56:46.489466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.062 qpair failed and we were unable to recover it. 
00:30:42.062 [2024-04-17 06:56:46.499360] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.062 [2024-04-17 06:56:46.499496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.062 [2024-04-17 06:56:46.499521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.062 [2024-04-17 06:56:46.499536] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.062 [2024-04-17 06:56:46.499549] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.062 [2024-04-17 06:56:46.499577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.062 qpair failed and we were unable to recover it. 00:30:42.062 [2024-04-17 06:56:46.509297] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.062 [2024-04-17 06:56:46.509445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.062 [2024-04-17 06:56:46.509471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.062 [2024-04-17 06:56:46.509489] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.062 [2024-04-17 06:56:46.509502] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.062 [2024-04-17 06:56:46.509530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.062 qpair failed and we were unable to recover it. 00:30:42.062 [2024-04-17 06:56:46.519387] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.062 [2024-04-17 06:56:46.519516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.062 [2024-04-17 06:56:46.519541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.062 [2024-04-17 06:56:46.519557] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.062 [2024-04-17 06:56:46.519570] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.062 [2024-04-17 06:56:46.519598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.062 qpair failed and we were unable to recover it. 
00:30:42.062 [2024-04-17 06:56:46.529411] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.062 [2024-04-17 06:56:46.529539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.062 [2024-04-17 06:56:46.529565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.062 [2024-04-17 06:56:46.529580] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.062 [2024-04-17 06:56:46.529592] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.062 [2024-04-17 06:56:46.529621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.062 qpair failed and we were unable to recover it. 00:30:42.062 [2024-04-17 06:56:46.539472] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.062 [2024-04-17 06:56:46.539613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.062 [2024-04-17 06:56:46.539642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.062 [2024-04-17 06:56:46.539657] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.062 [2024-04-17 06:56:46.539670] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.062 [2024-04-17 06:56:46.539698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.062 qpair failed and we were unable to recover it. 00:30:42.062 [2024-04-17 06:56:46.549387] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.062 [2024-04-17 06:56:46.549523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.062 [2024-04-17 06:56:46.549549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.062 [2024-04-17 06:56:46.549564] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.062 [2024-04-17 06:56:46.549576] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.062 [2024-04-17 06:56:46.549605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.062 qpair failed and we were unable to recover it. 
00:30:42.062 [2024-04-17 06:56:46.559427] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.062 [2024-04-17 06:56:46.559558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.062 [2024-04-17 06:56:46.559584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.062 [2024-04-17 06:56:46.559598] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.062 [2024-04-17 06:56:46.559611] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.062 [2024-04-17 06:56:46.559639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.062 qpair failed and we were unable to recover it. 00:30:42.062 [2024-04-17 06:56:46.569483] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.062 [2024-04-17 06:56:46.569608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.062 [2024-04-17 06:56:46.569634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.062 [2024-04-17 06:56:46.569648] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.062 [2024-04-17 06:56:46.569661] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.062 [2024-04-17 06:56:46.569705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.062 qpair failed and we were unable to recover it. 00:30:42.062 [2024-04-17 06:56:46.579595] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.062 [2024-04-17 06:56:46.579735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.062 [2024-04-17 06:56:46.579762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.062 [2024-04-17 06:56:46.579781] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.062 [2024-04-17 06:56:46.579794] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.062 [2024-04-17 06:56:46.579829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.062 qpair failed and we were unable to recover it. 
00:30:42.062 [2024-04-17 06:56:46.589651] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.062 [2024-04-17 06:56:46.589802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.062 [2024-04-17 06:56:46.589827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.062 [2024-04-17 06:56:46.589842] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.062 [2024-04-17 06:56:46.589855] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.062 [2024-04-17 06:56:46.589885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.062 qpair failed and we were unable to recover it. 00:30:42.062 [2024-04-17 06:56:46.599522] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.062 [2024-04-17 06:56:46.599716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.062 [2024-04-17 06:56:46.599758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.062 [2024-04-17 06:56:46.599774] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.062 [2024-04-17 06:56:46.599787] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.062 [2024-04-17 06:56:46.599815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.062 qpair failed and we were unable to recover it. 00:30:42.062 [2024-04-17 06:56:46.609631] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.062 [2024-04-17 06:56:46.609758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.062 [2024-04-17 06:56:46.609783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.062 [2024-04-17 06:56:46.609798] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.062 [2024-04-17 06:56:46.609810] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.062 [2024-04-17 06:56:46.609839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.062 qpair failed and we were unable to recover it. 
00:30:42.062 [2024-04-17 06:56:46.619605] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.062 [2024-04-17 06:56:46.619789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.063 [2024-04-17 06:56:46.619813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.063 [2024-04-17 06:56:46.619827] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.063 [2024-04-17 06:56:46.619840] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.063 [2024-04-17 06:56:46.619869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.063 qpair failed and we were unable to recover it. 00:30:42.063 [2024-04-17 06:56:46.629603] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.063 [2024-04-17 06:56:46.629739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.063 [2024-04-17 06:56:46.629764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.063 [2024-04-17 06:56:46.629778] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.063 [2024-04-17 06:56:46.629791] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.063 [2024-04-17 06:56:46.629819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.063 qpair failed and we were unable to recover it. 00:30:42.063 [2024-04-17 06:56:46.639647] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.063 [2024-04-17 06:56:46.639784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.063 [2024-04-17 06:56:46.639808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.063 [2024-04-17 06:56:46.639823] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.063 [2024-04-17 06:56:46.639835] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.063 [2024-04-17 06:56:46.639864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.063 qpair failed and we were unable to recover it. 
00:30:42.063 [2024-04-17 06:56:46.649655] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.063 [2024-04-17 06:56:46.649785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.063 [2024-04-17 06:56:46.649810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.063 [2024-04-17 06:56:46.649825] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.063 [2024-04-17 06:56:46.649837] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.063 [2024-04-17 06:56:46.649866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.063 qpair failed and we were unable to recover it. 00:30:42.063 [2024-04-17 06:56:46.659714] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.063 [2024-04-17 06:56:46.659863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.063 [2024-04-17 06:56:46.659888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.063 [2024-04-17 06:56:46.659903] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.063 [2024-04-17 06:56:46.659916] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.063 [2024-04-17 06:56:46.659945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.063 qpair failed and we were unable to recover it. 00:30:42.322 [2024-04-17 06:56:46.669735] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.322 [2024-04-17 06:56:46.669872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.322 [2024-04-17 06:56:46.669897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.322 [2024-04-17 06:56:46.669912] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.322 [2024-04-17 06:56:46.669929] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.322 [2024-04-17 06:56:46.669959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.322 qpair failed and we were unable to recover it. 
00:30:42.322 [2024-04-17 06:56:46.679729] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.322 [2024-04-17 06:56:46.679861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.322 [2024-04-17 06:56:46.679887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.322 [2024-04-17 06:56:46.679901] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.322 [2024-04-17 06:56:46.679914] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.322 [2024-04-17 06:56:46.679943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.322 qpair failed and we were unable to recover it. 00:30:42.322 [2024-04-17 06:56:46.689766] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.322 [2024-04-17 06:56:46.689891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.322 [2024-04-17 06:56:46.689917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.322 [2024-04-17 06:56:46.689932] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.322 [2024-04-17 06:56:46.689944] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.322 [2024-04-17 06:56:46.689973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.322 qpair failed and we were unable to recover it. 00:30:42.322 [2024-04-17 06:56:46.699793] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.322 [2024-04-17 06:56:46.699937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.322 [2024-04-17 06:56:46.699963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.322 [2024-04-17 06:56:46.699977] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.322 [2024-04-17 06:56:46.699990] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.322 [2024-04-17 06:56:46.700018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.322 qpair failed and we were unable to recover it. 
00:30:42.322 [2024-04-17 06:56:46.709843] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.322 [2024-04-17 06:56:46.709979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.322 [2024-04-17 06:56:46.710005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.322 [2024-04-17 06:56:46.710019] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.322 [2024-04-17 06:56:46.710032] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.322 [2024-04-17 06:56:46.710060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.322 qpair failed and we were unable to recover it. 00:30:42.322 [2024-04-17 06:56:46.719855] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.322 [2024-04-17 06:56:46.719988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.322 [2024-04-17 06:56:46.720013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.322 [2024-04-17 06:56:46.720028] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.322 [2024-04-17 06:56:46.720041] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.322 [2024-04-17 06:56:46.720069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.323 qpair failed and we were unable to recover it. 00:30:42.323 [2024-04-17 06:56:46.729871] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.323 [2024-04-17 06:56:46.730004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.323 [2024-04-17 06:56:46.730030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.323 [2024-04-17 06:56:46.730044] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.323 [2024-04-17 06:56:46.730057] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.323 [2024-04-17 06:56:46.730085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.323 qpair failed and we were unable to recover it. 
00:30:42.323 [2024-04-17 06:56:46.739944] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.323 [2024-04-17 06:56:46.740110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.323 [2024-04-17 06:56:46.740136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.323 [2024-04-17 06:56:46.740151] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.323 [2024-04-17 06:56:46.740163] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.323 [2024-04-17 06:56:46.740199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.323 qpair failed and we were unable to recover it. 00:30:42.323 [2024-04-17 06:56:46.749943] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.323 [2024-04-17 06:56:46.750075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.323 [2024-04-17 06:56:46.750102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.323 [2024-04-17 06:56:46.750117] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.323 [2024-04-17 06:56:46.750129] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.323 [2024-04-17 06:56:46.750158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.323 qpair failed and we were unable to recover it. 00:30:42.323 [2024-04-17 06:56:46.759965] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.323 [2024-04-17 06:56:46.760088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.323 [2024-04-17 06:56:46.760115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.323 [2024-04-17 06:56:46.760135] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.323 [2024-04-17 06:56:46.760149] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.323 [2024-04-17 06:56:46.760184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.323 qpair failed and we were unable to recover it. 
00:30:42.323 [2024-04-17 06:56:46.769998] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.323 [2024-04-17 06:56:46.770128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.323 [2024-04-17 06:56:46.770154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.323 [2024-04-17 06:56:46.770169] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.323 [2024-04-17 06:56:46.770190] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.323 [2024-04-17 06:56:46.770220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.323 qpair failed and we were unable to recover it. 00:30:42.323 [2024-04-17 06:56:46.780073] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.323 [2024-04-17 06:56:46.780250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.323 [2024-04-17 06:56:46.780277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.323 [2024-04-17 06:56:46.780292] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.323 [2024-04-17 06:56:46.780304] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.323 [2024-04-17 06:56:46.780334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.323 qpair failed and we were unable to recover it. 00:30:42.323 [2024-04-17 06:56:46.790084] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.323 [2024-04-17 06:56:46.790266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.323 [2024-04-17 06:56:46.790293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.323 [2024-04-17 06:56:46.790309] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.323 [2024-04-17 06:56:46.790321] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.323 [2024-04-17 06:56:46.790349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.323 qpair failed and we were unable to recover it. 
00:30:42.323 [2024-04-17 06:56:46.800090] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.323 [2024-04-17 06:56:46.800262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.323 [2024-04-17 06:56:46.800288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.323 [2024-04-17 06:56:46.800303] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.323 [2024-04-17 06:56:46.800315] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.323 [2024-04-17 06:56:46.800344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.323 qpair failed and we were unable to recover it. 00:30:42.323 [2024-04-17 06:56:46.810107] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.323 [2024-04-17 06:56:46.810283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.323 [2024-04-17 06:56:46.810310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.323 [2024-04-17 06:56:46.810325] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.323 [2024-04-17 06:56:46.810337] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.323 [2024-04-17 06:56:46.810365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.323 qpair failed and we were unable to recover it. 00:30:42.323 [2024-04-17 06:56:46.820153] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.323 [2024-04-17 06:56:46.820298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.323 [2024-04-17 06:56:46.820325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.323 [2024-04-17 06:56:46.820339] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.323 [2024-04-17 06:56:46.820352] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.323 [2024-04-17 06:56:46.820381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.323 qpair failed and we were unable to recover it. 
00:30:42.323 [2024-04-17 06:56:46.830201] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.323 [2024-04-17 06:56:46.830331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.323 [2024-04-17 06:56:46.830357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.323 [2024-04-17 06:56:46.830372] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.323 [2024-04-17 06:56:46.830385] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.323 [2024-04-17 06:56:46.830414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.323 qpair failed and we were unable to recover it. 00:30:42.323 [2024-04-17 06:56:46.840206] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.323 [2024-04-17 06:56:46.840334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.323 [2024-04-17 06:56:46.840360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.323 [2024-04-17 06:56:46.840375] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.323 [2024-04-17 06:56:46.840388] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.323 [2024-04-17 06:56:46.840417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.323 qpair failed and we were unable to recover it. 00:30:42.323 [2024-04-17 06:56:46.850229] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.323 [2024-04-17 06:56:46.850361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.323 [2024-04-17 06:56:46.850392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.323 [2024-04-17 06:56:46.850408] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.323 [2024-04-17 06:56:46.850421] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.323 [2024-04-17 06:56:46.850449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.323 qpair failed and we were unable to recover it. 
00:30:42.323 [2024-04-17 06:56:46.860299] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.323 [2024-04-17 06:56:46.860437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.323 [2024-04-17 06:56:46.860464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.323 [2024-04-17 06:56:46.860479] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.323 [2024-04-17 06:56:46.860492] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.323 [2024-04-17 06:56:46.860521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.323 qpair failed and we were unable to recover it. 00:30:42.323 [2024-04-17 06:56:46.870293] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.323 [2024-04-17 06:56:46.870425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.323 [2024-04-17 06:56:46.870451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.323 [2024-04-17 06:56:46.870466] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.323 [2024-04-17 06:56:46.870478] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.323 [2024-04-17 06:56:46.870506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.323 qpair failed and we were unable to recover it. 00:30:42.323 [2024-04-17 06:56:46.880314] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.323 [2024-04-17 06:56:46.880455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.323 [2024-04-17 06:56:46.880481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.323 [2024-04-17 06:56:46.880497] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.324 [2024-04-17 06:56:46.880509] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.324 [2024-04-17 06:56:46.880538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.324 qpair failed and we were unable to recover it. 
00:30:42.324 [2024-04-17 06:56:46.890356] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.324 [2024-04-17 06:56:46.890492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.324 [2024-04-17 06:56:46.890518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.324 [2024-04-17 06:56:46.890532] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.324 [2024-04-17 06:56:46.890545] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.324 [2024-04-17 06:56:46.890579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.324 qpair failed and we were unable to recover it. 00:30:42.324 [2024-04-17 06:56:46.900438] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.324 [2024-04-17 06:56:46.900599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.324 [2024-04-17 06:56:46.900625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.324 [2024-04-17 06:56:46.900640] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.324 [2024-04-17 06:56:46.900652] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.324 [2024-04-17 06:56:46.900681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.324 qpair failed and we were unable to recover it. 00:30:42.324 [2024-04-17 06:56:46.910461] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.324 [2024-04-17 06:56:46.910633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.324 [2024-04-17 06:56:46.910660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.324 [2024-04-17 06:56:46.910674] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.324 [2024-04-17 06:56:46.910687] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.324 [2024-04-17 06:56:46.910715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.324 qpair failed and we were unable to recover it. 
00:30:42.324 [2024-04-17 06:56:46.920430] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.324 [2024-04-17 06:56:46.920557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.324 [2024-04-17 06:56:46.920584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.324 [2024-04-17 06:56:46.920598] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.324 [2024-04-17 06:56:46.920611] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.324 [2024-04-17 06:56:46.920639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.324 qpair failed and we were unable to recover it. 00:30:42.583 [2024-04-17 06:56:46.930477] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.583 [2024-04-17 06:56:46.930652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.583 [2024-04-17 06:56:46.930679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.583 [2024-04-17 06:56:46.930694] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.583 [2024-04-17 06:56:46.930707] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.583 [2024-04-17 06:56:46.930736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.583 qpair failed and we were unable to recover it. 00:30:42.583 [2024-04-17 06:56:46.940492] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.583 [2024-04-17 06:56:46.940627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.583 [2024-04-17 06:56:46.940660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.583 [2024-04-17 06:56:46.940676] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.583 [2024-04-17 06:56:46.940688] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.583 [2024-04-17 06:56:46.940717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.583 qpair failed and we were unable to recover it. 
00:30:42.583 [2024-04-17 06:56:46.950551] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.583 [2024-04-17 06:56:46.950685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.583 [2024-04-17 06:56:46.950712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.583 [2024-04-17 06:56:46.950726] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.583 [2024-04-17 06:56:46.950739] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.583 [2024-04-17 06:56:46.950767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.583 qpair failed and we were unable to recover it. 00:30:42.583 [2024-04-17 06:56:46.960535] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.583 [2024-04-17 06:56:46.960672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.583 [2024-04-17 06:56:46.960699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.583 [2024-04-17 06:56:46.960713] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.583 [2024-04-17 06:56:46.960725] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.583 [2024-04-17 06:56:46.960754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.583 qpair failed and we were unable to recover it. 00:30:42.583 [2024-04-17 06:56:46.970570] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.583 [2024-04-17 06:56:46.970720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.583 [2024-04-17 06:56:46.970746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.583 [2024-04-17 06:56:46.970761] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.583 [2024-04-17 06:56:46.970773] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.583 [2024-04-17 06:56:46.970801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.583 qpair failed and we were unable to recover it. 
00:30:42.584 [2024-04-17 06:56:46.980622] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.584 [2024-04-17 06:56:46.980759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.584 [2024-04-17 06:56:46.980785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.584 [2024-04-17 06:56:46.980800] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.584 [2024-04-17 06:56:46.980813] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.584 [2024-04-17 06:56:46.980847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.584 qpair failed and we were unable to recover it. 00:30:42.584 [2024-04-17 06:56:46.990664] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.584 [2024-04-17 06:56:46.990844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.584 [2024-04-17 06:56:46.990870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.584 [2024-04-17 06:56:46.990885] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.584 [2024-04-17 06:56:46.990897] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.584 [2024-04-17 06:56:46.990926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.584 qpair failed and we were unable to recover it. 00:30:42.584 [2024-04-17 06:56:47.000715] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.584 [2024-04-17 06:56:47.000854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.584 [2024-04-17 06:56:47.000880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.584 [2024-04-17 06:56:47.000895] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.584 [2024-04-17 06:56:47.000907] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.584 [2024-04-17 06:56:47.000950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.584 qpair failed and we were unable to recover it. 
00:30:42.584 [2024-04-17 06:56:47.010681] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.584 [2024-04-17 06:56:47.010811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.584 [2024-04-17 06:56:47.010838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.584 [2024-04-17 06:56:47.010853] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.584 [2024-04-17 06:56:47.010865] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.584 [2024-04-17 06:56:47.010894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.584 qpair failed and we were unable to recover it. 00:30:42.584 [2024-04-17 06:56:47.020768] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.584 [2024-04-17 06:56:47.020923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.584 [2024-04-17 06:56:47.020948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.584 [2024-04-17 06:56:47.020963] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.584 [2024-04-17 06:56:47.020976] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.584 [2024-04-17 06:56:47.021005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.584 qpair failed and we were unable to recover it. 00:30:42.584 [2024-04-17 06:56:47.030772] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.584 [2024-04-17 06:56:47.030933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.584 [2024-04-17 06:56:47.030960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.584 [2024-04-17 06:56:47.030974] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.584 [2024-04-17 06:56:47.030987] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.584 [2024-04-17 06:56:47.031016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.584 qpair failed and we were unable to recover it. 
00:30:42.584 [2024-04-17 06:56:47.040770] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.584 [2024-04-17 06:56:47.040901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.584 [2024-04-17 06:56:47.040925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.584 [2024-04-17 06:56:47.040940] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.584 [2024-04-17 06:56:47.040952] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.584 [2024-04-17 06:56:47.040982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.584 qpair failed and we were unable to recover it. 00:30:42.584 [2024-04-17 06:56:47.050827] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.584 [2024-04-17 06:56:47.050955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.584 [2024-04-17 06:56:47.050981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.584 [2024-04-17 06:56:47.050996] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.584 [2024-04-17 06:56:47.051009] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.584 [2024-04-17 06:56:47.051037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.584 qpair failed and we were unable to recover it. 00:30:42.584 [2024-04-17 06:56:47.060861] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.584 [2024-04-17 06:56:47.061048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.584 [2024-04-17 06:56:47.061074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.584 [2024-04-17 06:56:47.061088] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.584 [2024-04-17 06:56:47.061101] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.584 [2024-04-17 06:56:47.061129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.584 qpair failed and we were unable to recover it. 
00:30:42.584 [2024-04-17 06:56:47.070888] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.584 [2024-04-17 06:56:47.071079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.584 [2024-04-17 06:56:47.071106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.584 [2024-04-17 06:56:47.071121] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.584 [2024-04-17 06:56:47.071138] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.584 [2024-04-17 06:56:47.071168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.584 qpair failed and we were unable to recover it. 00:30:42.584 [2024-04-17 06:56:47.080891] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.584 [2024-04-17 06:56:47.081026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.584 [2024-04-17 06:56:47.081052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.584 [2024-04-17 06:56:47.081067] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.584 [2024-04-17 06:56:47.081079] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.584 [2024-04-17 06:56:47.081107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.584 qpair failed and we were unable to recover it. 00:30:42.584 [2024-04-17 06:56:47.090927] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.584 [2024-04-17 06:56:47.091080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.584 [2024-04-17 06:56:47.091106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.584 [2024-04-17 06:56:47.091121] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.584 [2024-04-17 06:56:47.091133] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.584 [2024-04-17 06:56:47.091184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.584 qpair failed and we were unable to recover it. 
00:30:42.584 [2024-04-17 06:56:47.101012] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.584 [2024-04-17 06:56:47.101155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.584 [2024-04-17 06:56:47.101189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.584 [2024-04-17 06:56:47.101212] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.584 [2024-04-17 06:56:47.101228] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.584 [2024-04-17 06:56:47.101258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.584 qpair failed and we were unable to recover it. 00:30:42.584 [2024-04-17 06:56:47.111006] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.584 [2024-04-17 06:56:47.111150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.584 [2024-04-17 06:56:47.111183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.585 [2024-04-17 06:56:47.111200] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.585 [2024-04-17 06:56:47.111212] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.585 [2024-04-17 06:56:47.111242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.585 qpair failed and we were unable to recover it. 00:30:42.585 [2024-04-17 06:56:47.121005] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.585 [2024-04-17 06:56:47.121150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.585 [2024-04-17 06:56:47.121182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.585 [2024-04-17 06:56:47.121199] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.585 [2024-04-17 06:56:47.121211] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.585 [2024-04-17 06:56:47.121241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.585 qpair failed and we were unable to recover it. 
00:30:42.585 [2024-04-17 06:56:47.131030] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.585 [2024-04-17 06:56:47.131168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.585 [2024-04-17 06:56:47.131200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.585 [2024-04-17 06:56:47.131216] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.585 [2024-04-17 06:56:47.131228] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.585 [2024-04-17 06:56:47.131257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.585 qpair failed and we were unable to recover it. 00:30:42.585 [2024-04-17 06:56:47.141101] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.585 [2024-04-17 06:56:47.141249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.585 [2024-04-17 06:56:47.141274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.585 [2024-04-17 06:56:47.141289] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.585 [2024-04-17 06:56:47.141302] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.585 [2024-04-17 06:56:47.141331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.585 qpair failed and we were unable to recover it. 00:30:42.585 [2024-04-17 06:56:47.151096] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.585 [2024-04-17 06:56:47.151278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.585 [2024-04-17 06:56:47.151305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.585 [2024-04-17 06:56:47.151320] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.585 [2024-04-17 06:56:47.151333] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.585 [2024-04-17 06:56:47.151362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.585 qpair failed and we were unable to recover it. 
00:30:42.585 [2024-04-17 06:56:47.161104] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.585 [2024-04-17 06:56:47.161242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.585 [2024-04-17 06:56:47.161268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.585 [2024-04-17 06:56:47.161288] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.585 [2024-04-17 06:56:47.161300] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.585 [2024-04-17 06:56:47.161330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.585 qpair failed and we were unable to recover it. 00:30:42.585 [2024-04-17 06:56:47.171137] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.585 [2024-04-17 06:56:47.171316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.585 [2024-04-17 06:56:47.171343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.585 [2024-04-17 06:56:47.171358] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.585 [2024-04-17 06:56:47.171370] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.585 [2024-04-17 06:56:47.171399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.585 qpair failed and we were unable to recover it. 00:30:42.585 [2024-04-17 06:56:47.181187] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.585 [2024-04-17 06:56:47.181359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.585 [2024-04-17 06:56:47.181385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.585 [2024-04-17 06:56:47.181399] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.585 [2024-04-17 06:56:47.181411] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.585 [2024-04-17 06:56:47.181439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.585 qpair failed and we were unable to recover it. 
00:30:42.844 [2024-04-17 06:56:47.191201] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.844 [2024-04-17 06:56:47.191343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.844 [2024-04-17 06:56:47.191369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.844 [2024-04-17 06:56:47.191383] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.844 [2024-04-17 06:56:47.191396] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.844 [2024-04-17 06:56:47.191424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.844 qpair failed and we were unable to recover it. 00:30:42.844 [2024-04-17 06:56:47.201221] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.844 [2024-04-17 06:56:47.201353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.844 [2024-04-17 06:56:47.201379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.844 [2024-04-17 06:56:47.201394] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.844 [2024-04-17 06:56:47.201406] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.844 [2024-04-17 06:56:47.201435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.844 qpair failed and we were unable to recover it. 00:30:42.844 [2024-04-17 06:56:47.211255] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.844 [2024-04-17 06:56:47.211393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.844 [2024-04-17 06:56:47.211419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.844 [2024-04-17 06:56:47.211433] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.844 [2024-04-17 06:56:47.211446] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.844 [2024-04-17 06:56:47.211474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.844 qpair failed and we were unable to recover it. 
00:30:42.844 [2024-04-17 06:56:47.221290] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.844 [2024-04-17 06:56:47.221432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.844 [2024-04-17 06:56:47.221457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.844 [2024-04-17 06:56:47.221472] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.844 [2024-04-17 06:56:47.221485] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.844 [2024-04-17 06:56:47.221515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.844 qpair failed and we were unable to recover it. 00:30:42.844 [2024-04-17 06:56:47.231318] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.844 [2024-04-17 06:56:47.231491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.844 [2024-04-17 06:56:47.231517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.844 [2024-04-17 06:56:47.231532] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.844 [2024-04-17 06:56:47.231544] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.844 [2024-04-17 06:56:47.231572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.844 qpair failed and we were unable to recover it. 00:30:42.844 [2024-04-17 06:56:47.241321] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.844 [2024-04-17 06:56:47.241469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.844 [2024-04-17 06:56:47.241494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.844 [2024-04-17 06:56:47.241508] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.845 [2024-04-17 06:56:47.241520] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.845 [2024-04-17 06:56:47.241549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.845 qpair failed and we were unable to recover it. 
00:30:42.845 [2024-04-17 06:56:47.251381] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.845 [2024-04-17 06:56:47.251516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.845 [2024-04-17 06:56:47.251542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.845 [2024-04-17 06:56:47.251562] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.845 [2024-04-17 06:56:47.251576] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.845 [2024-04-17 06:56:47.251620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.845 qpair failed and we were unable to recover it. 00:30:42.845 [2024-04-17 06:56:47.261417] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.845 [2024-04-17 06:56:47.261559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.845 [2024-04-17 06:56:47.261584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.845 [2024-04-17 06:56:47.261599] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.845 [2024-04-17 06:56:47.261611] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.845 [2024-04-17 06:56:47.261640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.845 qpair failed and we were unable to recover it. 00:30:42.845 [2024-04-17 06:56:47.271413] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.845 [2024-04-17 06:56:47.271544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.845 [2024-04-17 06:56:47.271571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.845 [2024-04-17 06:56:47.271586] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.845 [2024-04-17 06:56:47.271598] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.845 [2024-04-17 06:56:47.271627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.845 qpair failed and we were unable to recover it. 
00:30:42.845 [2024-04-17 06:56:47.281435] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.845 [2024-04-17 06:56:47.281565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.845 [2024-04-17 06:56:47.281591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.845 [2024-04-17 06:56:47.281606] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.845 [2024-04-17 06:56:47.281618] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.845 [2024-04-17 06:56:47.281647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.845 qpair failed and we were unable to recover it. 00:30:42.845 [2024-04-17 06:56:47.291480] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.845 [2024-04-17 06:56:47.291608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.845 [2024-04-17 06:56:47.291632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.845 [2024-04-17 06:56:47.291646] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.845 [2024-04-17 06:56:47.291658] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.845 [2024-04-17 06:56:47.291686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.845 qpair failed and we were unable to recover it. 00:30:42.845 [2024-04-17 06:56:47.301531] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.845 [2024-04-17 06:56:47.301670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.845 [2024-04-17 06:56:47.301695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.845 [2024-04-17 06:56:47.301710] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.845 [2024-04-17 06:56:47.301723] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.845 [2024-04-17 06:56:47.301752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.845 qpair failed and we were unable to recover it. 
00:30:42.845 [2024-04-17 06:56:47.311519] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.845 [2024-04-17 06:56:47.311662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.845 [2024-04-17 06:56:47.311688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.845 [2024-04-17 06:56:47.311703] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.845 [2024-04-17 06:56:47.311716] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.845 [2024-04-17 06:56:47.311746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.845 qpair failed and we were unable to recover it. 00:30:42.845 [2024-04-17 06:56:47.321564] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.845 [2024-04-17 06:56:47.321713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.845 [2024-04-17 06:56:47.321740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.845 [2024-04-17 06:56:47.321755] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.845 [2024-04-17 06:56:47.321767] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.845 [2024-04-17 06:56:47.321797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.845 qpair failed and we were unable to recover it. 00:30:42.845 [2024-04-17 06:56:47.331634] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.845 [2024-04-17 06:56:47.331822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.845 [2024-04-17 06:56:47.331848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.845 [2024-04-17 06:56:47.331877] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.845 [2024-04-17 06:56:47.331889] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.845 [2024-04-17 06:56:47.331918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.845 qpair failed and we were unable to recover it. 
00:30:42.845 [2024-04-17 06:56:47.341641] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.845 [2024-04-17 06:56:47.341813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.845 [2024-04-17 06:56:47.341844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.845 [2024-04-17 06:56:47.341861] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.845 [2024-04-17 06:56:47.341873] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.845 [2024-04-17 06:56:47.341902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.845 qpair failed and we were unable to recover it. 00:30:42.845 [2024-04-17 06:56:47.351675] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.845 [2024-04-17 06:56:47.351815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.845 [2024-04-17 06:56:47.351841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.845 [2024-04-17 06:56:47.351856] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.845 [2024-04-17 06:56:47.351868] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.845 [2024-04-17 06:56:47.351898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.845 qpair failed and we were unable to recover it. 00:30:42.845 [2024-04-17 06:56:47.361704] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.845 [2024-04-17 06:56:47.361849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.845 [2024-04-17 06:56:47.361875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.845 [2024-04-17 06:56:47.361891] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.845 [2024-04-17 06:56:47.361904] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.845 [2024-04-17 06:56:47.361947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.845 qpair failed and we were unable to recover it. 
00:30:42.845 [2024-04-17 06:56:47.371736] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.845 [2024-04-17 06:56:47.371876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.845 [2024-04-17 06:56:47.371902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.845 [2024-04-17 06:56:47.371918] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.845 [2024-04-17 06:56:47.371931] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.845 [2024-04-17 06:56:47.371960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.845 qpair failed and we were unable to recover it. 00:30:42.846 [2024-04-17 06:56:47.381750] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.846 [2024-04-17 06:56:47.381893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.846 [2024-04-17 06:56:47.381920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.846 [2024-04-17 06:56:47.381935] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.846 [2024-04-17 06:56:47.381948] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.846 [2024-04-17 06:56:47.381983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.846 qpair failed and we were unable to recover it. 00:30:42.846 [2024-04-17 06:56:47.391786] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.846 [2024-04-17 06:56:47.391933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.846 [2024-04-17 06:56:47.391960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.846 [2024-04-17 06:56:47.391979] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.846 [2024-04-17 06:56:47.391991] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.846 [2024-04-17 06:56:47.392020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.846 qpair failed and we were unable to recover it. 
00:30:42.846 [2024-04-17 06:56:47.401812] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.846 [2024-04-17 06:56:47.401965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.846 [2024-04-17 06:56:47.401992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.846 [2024-04-17 06:56:47.402007] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.846 [2024-04-17 06:56:47.402034] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.846 [2024-04-17 06:56:47.402063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.846 qpair failed and we were unable to recover it. 00:30:42.846 [2024-04-17 06:56:47.411821] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.846 [2024-04-17 06:56:47.411968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.846 [2024-04-17 06:56:47.411994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.846 [2024-04-17 06:56:47.412009] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.846 [2024-04-17 06:56:47.412022] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.846 [2024-04-17 06:56:47.412065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.846 qpair failed and we were unable to recover it. 00:30:42.846 [2024-04-17 06:56:47.421893] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.846 [2024-04-17 06:56:47.422039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.846 [2024-04-17 06:56:47.422065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.846 [2024-04-17 06:56:47.422080] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.846 [2024-04-17 06:56:47.422092] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.846 [2024-04-17 06:56:47.422121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.846 qpair failed and we were unable to recover it. 
00:30:42.846 [2024-04-17 06:56:47.431905] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.846 [2024-04-17 06:56:47.432080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.846 [2024-04-17 06:56:47.432115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.846 [2024-04-17 06:56:47.432133] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.846 [2024-04-17 06:56:47.432146] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.846 [2024-04-17 06:56:47.432183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.846 qpair failed and we were unable to recover it. 00:30:42.846 [2024-04-17 06:56:47.441919] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.846 [2024-04-17 06:56:47.442080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.846 [2024-04-17 06:56:47.442107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.846 [2024-04-17 06:56:47.442122] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.846 [2024-04-17 06:56:47.442134] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:42.846 [2024-04-17 06:56:47.442173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.846 qpair failed and we were unable to recover it. 00:30:43.105 [2024-04-17 06:56:47.451935] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.105 [2024-04-17 06:56:47.452070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.105 [2024-04-17 06:56:47.452097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.105 [2024-04-17 06:56:47.452112] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.105 [2024-04-17 06:56:47.452124] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.105 [2024-04-17 06:56:47.452153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.105 qpair failed and we were unable to recover it. 
00:30:43.105 [2024-04-17 06:56:47.461963] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.105 [2024-04-17 06:56:47.462125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.105 [2024-04-17 06:56:47.462152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.105 [2024-04-17 06:56:47.462167] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.105 [2024-04-17 06:56:47.462188] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.105 [2024-04-17 06:56:47.462218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.105 qpair failed and we were unable to recover it. 00:30:43.105 [2024-04-17 06:56:47.471997] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.105 [2024-04-17 06:56:47.472138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.105 [2024-04-17 06:56:47.472164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.105 [2024-04-17 06:56:47.472186] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.105 [2024-04-17 06:56:47.472205] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.105 [2024-04-17 06:56:47.472235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.105 qpair failed and we were unable to recover it. 00:30:43.105 [2024-04-17 06:56:47.482011] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.105 [2024-04-17 06:56:47.482160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.105 [2024-04-17 06:56:47.482196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.105 [2024-04-17 06:56:47.482212] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.105 [2024-04-17 06:56:47.482225] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.105 [2024-04-17 06:56:47.482254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.105 qpair failed and we were unable to recover it. 
00:30:43.105 [2024-04-17 06:56:47.492054] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.105 [2024-04-17 06:56:47.492231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.105 [2024-04-17 06:56:47.492258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.105 [2024-04-17 06:56:47.492273] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.105 [2024-04-17 06:56:47.492285] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.105 [2024-04-17 06:56:47.492314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.105 qpair failed and we were unable to recover it. 00:30:43.105 [2024-04-17 06:56:47.502111] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.105 [2024-04-17 06:56:47.502251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.105 [2024-04-17 06:56:47.502277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.106 [2024-04-17 06:56:47.502292] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.106 [2024-04-17 06:56:47.502304] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.106 [2024-04-17 06:56:47.502334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.106 qpair failed and we were unable to recover it. 00:30:43.106 [2024-04-17 06:56:47.512090] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.106 [2024-04-17 06:56:47.512232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.106 [2024-04-17 06:56:47.512259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.106 [2024-04-17 06:56:47.512274] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.106 [2024-04-17 06:56:47.512286] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.106 [2024-04-17 06:56:47.512315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.106 qpair failed and we were unable to recover it. 
00:30:43.106 [2024-04-17 06:56:47.522138] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.106 [2024-04-17 06:56:47.522300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.106 [2024-04-17 06:56:47.522326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.106 [2024-04-17 06:56:47.522341] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.106 [2024-04-17 06:56:47.522354] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.106 [2024-04-17 06:56:47.522383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.106 qpair failed and we were unable to recover it. 00:30:43.106 [2024-04-17 06:56:47.532195] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.106 [2024-04-17 06:56:47.532372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.106 [2024-04-17 06:56:47.532399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.106 [2024-04-17 06:56:47.532414] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.106 [2024-04-17 06:56:47.532426] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.106 [2024-04-17 06:56:47.532455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.106 qpair failed and we were unable to recover it. 00:30:43.106 [2024-04-17 06:56:47.542207] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.106 [2024-04-17 06:56:47.542349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.106 [2024-04-17 06:56:47.542376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.106 [2024-04-17 06:56:47.542391] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.106 [2024-04-17 06:56:47.542404] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.106 [2024-04-17 06:56:47.542433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.106 qpair failed and we were unable to recover it. 
00:30:43.106 [2024-04-17 06:56:47.552253] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.106 [2024-04-17 06:56:47.552430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.106 [2024-04-17 06:56:47.552467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.106 [2024-04-17 06:56:47.552482] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.106 [2024-04-17 06:56:47.552494] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.106 [2024-04-17 06:56:47.552522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.106 qpair failed and we were unable to recover it. 00:30:43.106 [2024-04-17 06:56:47.562236] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.106 [2024-04-17 06:56:47.562384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.106 [2024-04-17 06:56:47.562411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.106 [2024-04-17 06:56:47.562431] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.106 [2024-04-17 06:56:47.562444] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.106 [2024-04-17 06:56:47.562477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.106 qpair failed and we were unable to recover it. 00:30:43.106 [2024-04-17 06:56:47.572258] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.106 [2024-04-17 06:56:47.572388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.106 [2024-04-17 06:56:47.572415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.106 [2024-04-17 06:56:47.572430] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.106 [2024-04-17 06:56:47.572442] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.106 [2024-04-17 06:56:47.572471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.106 qpair failed and we were unable to recover it. 
00:30:43.106 [2024-04-17 06:56:47.582341] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.106 [2024-04-17 06:56:47.582483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.106 [2024-04-17 06:56:47.582508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.106 [2024-04-17 06:56:47.582523] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.106 [2024-04-17 06:56:47.582535] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.106 [2024-04-17 06:56:47.582564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.106 qpair failed and we were unable to recover it. 00:30:43.106 [2024-04-17 06:56:47.592353] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.106 [2024-04-17 06:56:47.592487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.106 [2024-04-17 06:56:47.592512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.106 [2024-04-17 06:56:47.592526] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.106 [2024-04-17 06:56:47.592539] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.106 [2024-04-17 06:56:47.592568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.106 qpair failed and we were unable to recover it. 00:30:43.106 [2024-04-17 06:56:47.602353] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.106 [2024-04-17 06:56:47.602498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.106 [2024-04-17 06:56:47.602525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.106 [2024-04-17 06:56:47.602540] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.106 [2024-04-17 06:56:47.602552] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.106 [2024-04-17 06:56:47.602581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.106 qpair failed and we were unable to recover it. 
00:30:43.106 [2024-04-17 06:56:47.612369] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.106 [2024-04-17 06:56:47.612508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.106 [2024-04-17 06:56:47.612534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.106 [2024-04-17 06:56:47.612549] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.106 [2024-04-17 06:56:47.612562] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.106 [2024-04-17 06:56:47.612590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.106 qpair failed and we were unable to recover it. 00:30:43.106 [2024-04-17 06:56:47.622442] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.106 [2024-04-17 06:56:47.622576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.106 [2024-04-17 06:56:47.622600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.106 [2024-04-17 06:56:47.622614] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.106 [2024-04-17 06:56:47.622626] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.106 [2024-04-17 06:56:47.622655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.106 qpair failed and we were unable to recover it. 00:30:43.106 [2024-04-17 06:56:47.632437] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.106 [2024-04-17 06:56:47.632574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.106 [2024-04-17 06:56:47.632600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.106 [2024-04-17 06:56:47.632615] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.106 [2024-04-17 06:56:47.632627] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.107 [2024-04-17 06:56:47.632656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.107 qpair failed and we were unable to recover it. 
00:30:43.107 [2024-04-17 06:56:47.642465] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.107 [2024-04-17 06:56:47.642604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.107 [2024-04-17 06:56:47.642631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.107 [2024-04-17 06:56:47.642646] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.107 [2024-04-17 06:56:47.642659] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.107 [2024-04-17 06:56:47.642687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.107 qpair failed and we were unable to recover it. 00:30:43.107 [2024-04-17 06:56:47.652483] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.107 [2024-04-17 06:56:47.652620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.107 [2024-04-17 06:56:47.652647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.107 [2024-04-17 06:56:47.652667] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.107 [2024-04-17 06:56:47.652681] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.107 [2024-04-17 06:56:47.652724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.107 qpair failed and we were unable to recover it. 00:30:43.107 [2024-04-17 06:56:47.662541] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.107 [2024-04-17 06:56:47.662685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.107 [2024-04-17 06:56:47.662711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.107 [2024-04-17 06:56:47.662726] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.107 [2024-04-17 06:56:47.662739] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.107 [2024-04-17 06:56:47.662768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.107 qpair failed and we were unable to recover it. 
00:30:43.107 [2024-04-17 06:56:47.672553] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.107 [2024-04-17 06:56:47.672739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.107 [2024-04-17 06:56:47.672765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.107 [2024-04-17 06:56:47.672780] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.107 [2024-04-17 06:56:47.672792] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.107 [2024-04-17 06:56:47.672820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.107 qpair failed and we were unable to recover it. 00:30:43.107 [2024-04-17 06:56:47.682571] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.107 [2024-04-17 06:56:47.682704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.107 [2024-04-17 06:56:47.682730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.107 [2024-04-17 06:56:47.682745] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.107 [2024-04-17 06:56:47.682758] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.107 [2024-04-17 06:56:47.682786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.107 qpair failed and we were unable to recover it. 00:30:43.107 [2024-04-17 06:56:47.692583] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.107 [2024-04-17 06:56:47.692712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.107 [2024-04-17 06:56:47.692739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.107 [2024-04-17 06:56:47.692754] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.107 [2024-04-17 06:56:47.692766] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.107 [2024-04-17 06:56:47.692795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.107 qpair failed and we were unable to recover it. 
00:30:43.107 [2024-04-17 06:56:47.702623] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.107 [2024-04-17 06:56:47.702766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.107 [2024-04-17 06:56:47.702792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.107 [2024-04-17 06:56:47.702808] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.107 [2024-04-17 06:56:47.702820] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.107 [2024-04-17 06:56:47.702849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.107 qpair failed and we were unable to recover it. 00:30:43.366 [2024-04-17 06:56:47.712646] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.366 [2024-04-17 06:56:47.712786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.366 [2024-04-17 06:56:47.712812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.366 [2024-04-17 06:56:47.712827] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.366 [2024-04-17 06:56:47.712839] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.366 [2024-04-17 06:56:47.712868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.366 qpair failed and we were unable to recover it. 00:30:43.366 [2024-04-17 06:56:47.722680] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.366 [2024-04-17 06:56:47.722849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.366 [2024-04-17 06:56:47.722875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.366 [2024-04-17 06:56:47.722890] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.366 [2024-04-17 06:56:47.722902] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.366 [2024-04-17 06:56:47.722931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.366 qpair failed and we were unable to recover it. 
00:30:43.366 [2024-04-17 06:56:47.732689] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.366 [2024-04-17 06:56:47.732814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.366 [2024-04-17 06:56:47.732840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.366 [2024-04-17 06:56:47.732855] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.366 [2024-04-17 06:56:47.732867] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.366 [2024-04-17 06:56:47.732896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.366 qpair failed and we were unable to recover it. 00:30:43.366 [2024-04-17 06:56:47.742780] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.366 [2024-04-17 06:56:47.742957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.366 [2024-04-17 06:56:47.742990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.366 [2024-04-17 06:56:47.743006] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.366 [2024-04-17 06:56:47.743019] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.366 [2024-04-17 06:56:47.743048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.366 qpair failed and we were unable to recover it. 00:30:43.366 [2024-04-17 06:56:47.752785] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.367 [2024-04-17 06:56:47.752922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.367 [2024-04-17 06:56:47.752948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.367 [2024-04-17 06:56:47.752965] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.367 [2024-04-17 06:56:47.752978] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.367 [2024-04-17 06:56:47.753007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.367 qpair failed and we were unable to recover it. 
00:30:43.367 [2024-04-17 06:56:47.762786] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.367 [2024-04-17 06:56:47.762933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.367 [2024-04-17 06:56:47.762958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.367 [2024-04-17 06:56:47.762972] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.367 [2024-04-17 06:56:47.762985] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.367 [2024-04-17 06:56:47.763013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.367 qpair failed and we were unable to recover it. 00:30:43.367 [2024-04-17 06:56:47.772820] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.367 [2024-04-17 06:56:47.772971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.367 [2024-04-17 06:56:47.772999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.367 [2024-04-17 06:56:47.773014] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.367 [2024-04-17 06:56:47.773026] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.367 [2024-04-17 06:56:47.773055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.367 qpair failed and we were unable to recover it. 00:30:43.367 [2024-04-17 06:56:47.782890] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.367 [2024-04-17 06:56:47.783024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.367 [2024-04-17 06:56:47.783050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.367 [2024-04-17 06:56:47.783065] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.367 [2024-04-17 06:56:47.783078] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.367 [2024-04-17 06:56:47.783112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.367 qpair failed and we were unable to recover it. 
00:30:43.367 [2024-04-17 06:56:47.792905] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.367 [2024-04-17 06:56:47.793097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.367 [2024-04-17 06:56:47.793123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.367 [2024-04-17 06:56:47.793138] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.367 [2024-04-17 06:56:47.793152] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.367 [2024-04-17 06:56:47.793188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.367 qpair failed and we were unable to recover it. 00:30:43.367 [2024-04-17 06:56:47.803054] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.367 [2024-04-17 06:56:47.803205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.367 [2024-04-17 06:56:47.803231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.367 [2024-04-17 06:56:47.803245] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.367 [2024-04-17 06:56:47.803258] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.367 [2024-04-17 06:56:47.803286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.367 qpair failed and we were unable to recover it. 00:30:43.367 [2024-04-17 06:56:47.812972] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.367 [2024-04-17 06:56:47.813129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.367 [2024-04-17 06:56:47.813154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.367 [2024-04-17 06:56:47.813168] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.367 [2024-04-17 06:56:47.813188] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.367 [2024-04-17 06:56:47.813218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.367 qpair failed and we were unable to recover it. 
00:30:43.367 [2024-04-17 06:56:47.822996] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.367 [2024-04-17 06:56:47.823142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.367 [2024-04-17 06:56:47.823166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.367 [2024-04-17 06:56:47.823187] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.367 [2024-04-17 06:56:47.823201] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.367 [2024-04-17 06:56:47.823230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.367 qpair failed and we were unable to recover it. 00:30:43.367 [2024-04-17 06:56:47.833033] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.367 [2024-04-17 06:56:47.833180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.367 [2024-04-17 06:56:47.833211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.367 [2024-04-17 06:56:47.833226] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.367 [2024-04-17 06:56:47.833239] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.367 [2024-04-17 06:56:47.833268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.367 qpair failed and we were unable to recover it. 00:30:43.367 [2024-04-17 06:56:47.843053] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.367 [2024-04-17 06:56:47.843197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.367 [2024-04-17 06:56:47.843223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.367 [2024-04-17 06:56:47.843237] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.367 [2024-04-17 06:56:47.843250] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.367 [2024-04-17 06:56:47.843279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.367 qpair failed and we were unable to recover it. 
00:30:43.367 [2024-04-17 06:56:47.853029] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.367 [2024-04-17 06:56:47.853158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.367 [2024-04-17 06:56:47.853191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.367 [2024-04-17 06:56:47.853206] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.367 [2024-04-17 06:56:47.853219] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.367 [2024-04-17 06:56:47.853248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.367 qpair failed and we were unable to recover it. 00:30:43.367 [2024-04-17 06:56:47.863064] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.367 [2024-04-17 06:56:47.863203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.367 [2024-04-17 06:56:47.863229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.367 [2024-04-17 06:56:47.863244] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.367 [2024-04-17 06:56:47.863257] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.367 [2024-04-17 06:56:47.863286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.367 qpair failed and we were unable to recover it. 00:30:43.367 [2024-04-17 06:56:47.873094] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.367 [2024-04-17 06:56:47.873241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.367 [2024-04-17 06:56:47.873267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.367 [2024-04-17 06:56:47.873282] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.368 [2024-04-17 06:56:47.873299] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.368 [2024-04-17 06:56:47.873329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.368 qpair failed and we were unable to recover it. 
00:30:43.368 [2024-04-17 06:56:47.883126] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.368 [2024-04-17 06:56:47.883267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.368 [2024-04-17 06:56:47.883294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.368 [2024-04-17 06:56:47.883309] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.368 [2024-04-17 06:56:47.883322] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.368 [2024-04-17 06:56:47.883351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.368 qpair failed and we were unable to recover it. 00:30:43.368 [2024-04-17 06:56:47.893158] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.368 [2024-04-17 06:56:47.893311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.368 [2024-04-17 06:56:47.893337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.368 [2024-04-17 06:56:47.893352] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.368 [2024-04-17 06:56:47.893364] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.368 [2024-04-17 06:56:47.893394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.368 qpair failed and we were unable to recover it. 00:30:43.368 [2024-04-17 06:56:47.903207] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.368 [2024-04-17 06:56:47.903353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.368 [2024-04-17 06:56:47.903378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.368 [2024-04-17 06:56:47.903392] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.368 [2024-04-17 06:56:47.903405] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.368 [2024-04-17 06:56:47.903434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.368 qpair failed and we were unable to recover it. 
00:30:43.368 [2024-04-17 06:56:47.913212] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.368 [2024-04-17 06:56:47.913354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.368 [2024-04-17 06:56:47.913381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.368 [2024-04-17 06:56:47.913395] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.368 [2024-04-17 06:56:47.913409] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.368 [2024-04-17 06:56:47.913438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.368 qpair failed and we were unable to recover it. 00:30:43.368 [2024-04-17 06:56:47.923276] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.368 [2024-04-17 06:56:47.923466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.368 [2024-04-17 06:56:47.923491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.368 [2024-04-17 06:56:47.923506] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.368 [2024-04-17 06:56:47.923519] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.368 [2024-04-17 06:56:47.923549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.368 qpair failed and we were unable to recover it. 00:30:43.368 [2024-04-17 06:56:47.933268] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.368 [2024-04-17 06:56:47.933418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.368 [2024-04-17 06:56:47.933444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.368 [2024-04-17 06:56:47.933458] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.368 [2024-04-17 06:56:47.933471] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.368 [2024-04-17 06:56:47.933500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.368 qpair failed and we were unable to recover it. 
00:30:43.368 [2024-04-17 06:56:47.943291] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.368 [2024-04-17 06:56:47.943429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.368 [2024-04-17 06:56:47.943454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.368 [2024-04-17 06:56:47.943469] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.368 [2024-04-17 06:56:47.943481] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.368 [2024-04-17 06:56:47.943511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.368 qpair failed and we were unable to recover it. 00:30:43.368 [2024-04-17 06:56:47.953343] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.368 [2024-04-17 06:56:47.953473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.368 [2024-04-17 06:56:47.953498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.368 [2024-04-17 06:56:47.953513] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.368 [2024-04-17 06:56:47.953525] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.368 [2024-04-17 06:56:47.953554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.368 qpair failed and we were unable to recover it. 00:30:43.368 [2024-04-17 06:56:47.963382] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.368 [2024-04-17 06:56:47.963512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.368 [2024-04-17 06:56:47.963537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.368 [2024-04-17 06:56:47.963552] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.368 [2024-04-17 06:56:47.963585] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.368 [2024-04-17 06:56:47.963614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.368 qpair failed and we were unable to recover it. 
00:30:43.627 [2024-04-17 06:56:47.973403] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.627 [2024-04-17 06:56:47.973540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.627 [2024-04-17 06:56:47.973566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.627 [2024-04-17 06:56:47.973580] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.627 [2024-04-17 06:56:47.973593] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.627 [2024-04-17 06:56:47.973622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.627 qpair failed and we were unable to recover it. 00:30:43.627 [2024-04-17 06:56:47.983462] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.627 [2024-04-17 06:56:47.983631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.627 [2024-04-17 06:56:47.983661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.627 [2024-04-17 06:56:47.983677] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.627 [2024-04-17 06:56:47.983690] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.627 [2024-04-17 06:56:47.983719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.627 qpair failed and we were unable to recover it. 00:30:43.627 [2024-04-17 06:56:47.993429] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.627 [2024-04-17 06:56:47.993566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.627 [2024-04-17 06:56:47.993595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.627 [2024-04-17 06:56:47.993610] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.627 [2024-04-17 06:56:47.993623] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.627 [2024-04-17 06:56:47.993652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.627 qpair failed and we were unable to recover it. 
00:30:43.627 [2024-04-17 06:56:48.003485] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.627 [2024-04-17 06:56:48.003635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.627 [2024-04-17 06:56:48.003662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.627 [2024-04-17 06:56:48.003678] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.627 [2024-04-17 06:56:48.003691] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.627 [2024-04-17 06:56:48.003736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.627 qpair failed and we were unable to recover it. 00:30:43.627 [2024-04-17 06:56:48.013476] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.627 [2024-04-17 06:56:48.013613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.627 [2024-04-17 06:56:48.013640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.627 [2024-04-17 06:56:48.013655] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.627 [2024-04-17 06:56:48.013667] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.627 [2024-04-17 06:56:48.013696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.627 qpair failed and we were unable to recover it. 00:30:43.627 [2024-04-17 06:56:48.023545] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.627 [2024-04-17 06:56:48.023678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.627 [2024-04-17 06:56:48.023705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.627 [2024-04-17 06:56:48.023719] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.627 [2024-04-17 06:56:48.023732] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.627 [2024-04-17 06:56:48.023760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.627 qpair failed and we were unable to recover it. 
00:30:43.627 [2024-04-17 06:56:48.033546] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.627 [2024-04-17 06:56:48.033682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.627 [2024-04-17 06:56:48.033714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.627 [2024-04-17 06:56:48.033729] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.627 [2024-04-17 06:56:48.033742] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.627 [2024-04-17 06:56:48.033771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.627 qpair failed and we were unable to recover it. 00:30:43.627 [2024-04-17 06:56:48.043591] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.627 [2024-04-17 06:56:48.043726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.627 [2024-04-17 06:56:48.043751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.627 [2024-04-17 06:56:48.043766] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.627 [2024-04-17 06:56:48.043779] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.627 [2024-04-17 06:56:48.043808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.627 qpair failed and we were unable to recover it. 00:30:43.627 [2024-04-17 06:56:48.053606] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.627 [2024-04-17 06:56:48.053736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.627 [2024-04-17 06:56:48.053762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.627 [2024-04-17 06:56:48.053783] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.627 [2024-04-17 06:56:48.053797] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.627 [2024-04-17 06:56:48.053825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.627 qpair failed and we were unable to recover it. 
00:30:43.627 [2024-04-17 06:56:48.063665] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.628 [2024-04-17 06:56:48.063809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.628 [2024-04-17 06:56:48.063835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.628 [2024-04-17 06:56:48.063850] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.628 [2024-04-17 06:56:48.063863] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.628 [2024-04-17 06:56:48.063892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.628 qpair failed and we were unable to recover it. 00:30:43.628 [2024-04-17 06:56:48.073639] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.628 [2024-04-17 06:56:48.073776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.628 [2024-04-17 06:56:48.073802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.628 [2024-04-17 06:56:48.073817] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.628 [2024-04-17 06:56:48.073831] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.628 [2024-04-17 06:56:48.073860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.628 qpair failed and we were unable to recover it. 00:30:43.628 [2024-04-17 06:56:48.083700] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.628 [2024-04-17 06:56:48.083832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.628 [2024-04-17 06:56:48.083859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.628 [2024-04-17 06:56:48.083875] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.628 [2024-04-17 06:56:48.083887] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.628 [2024-04-17 06:56:48.083930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.628 qpair failed and we were unable to recover it. 
00:30:43.628 [2024-04-17 06:56:48.093718] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.628 [2024-04-17 06:56:48.093861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.628 [2024-04-17 06:56:48.093897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.628 [2024-04-17 06:56:48.093912] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.628 [2024-04-17 06:56:48.093925] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.628 [2024-04-17 06:56:48.093968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.628 qpair failed and we were unable to recover it. 00:30:43.628 [2024-04-17 06:56:48.103729] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.628 [2024-04-17 06:56:48.103878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.628 [2024-04-17 06:56:48.103904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.628 [2024-04-17 06:56:48.103919] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.628 [2024-04-17 06:56:48.103932] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.628 [2024-04-17 06:56:48.103961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.628 qpair failed and we were unable to recover it. 00:30:43.628 [2024-04-17 06:56:48.113873] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.628 [2024-04-17 06:56:48.114008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.628 [2024-04-17 06:56:48.114032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.628 [2024-04-17 06:56:48.114047] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.628 [2024-04-17 06:56:48.114060] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.628 [2024-04-17 06:56:48.114089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.628 qpair failed and we were unable to recover it. 
00:30:43.628 [2024-04-17 06:56:48.123766] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.628 [2024-04-17 06:56:48.123911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.628 [2024-04-17 06:56:48.123938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.628 [2024-04-17 06:56:48.123952] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.628 [2024-04-17 06:56:48.123964] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.628 [2024-04-17 06:56:48.123993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.628 qpair failed and we were unable to recover it. 00:30:43.628 [2024-04-17 06:56:48.133802] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.628 [2024-04-17 06:56:48.133931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.628 [2024-04-17 06:56:48.133957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.628 [2024-04-17 06:56:48.133973] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.628 [2024-04-17 06:56:48.133986] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.628 [2024-04-17 06:56:48.134014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.628 qpair failed and we were unable to recover it. 00:30:43.628 [2024-04-17 06:56:48.143863] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.628 [2024-04-17 06:56:48.144047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.628 [2024-04-17 06:56:48.144076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.628 [2024-04-17 06:56:48.144092] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.628 [2024-04-17 06:56:48.144105] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.628 [2024-04-17 06:56:48.144134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.628 qpair failed and we were unable to recover it. 
00:30:43.628 [2024-04-17 06:56:48.153881] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.628 [2024-04-17 06:56:48.154020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.628 [2024-04-17 06:56:48.154045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.628 [2024-04-17 06:56:48.154059] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.628 [2024-04-17 06:56:48.154071] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.628 [2024-04-17 06:56:48.154100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.628 qpair failed and we were unable to recover it. 00:30:43.628 [2024-04-17 06:56:48.163931] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.628 [2024-04-17 06:56:48.164107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.628 [2024-04-17 06:56:48.164132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.628 [2024-04-17 06:56:48.164147] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.628 [2024-04-17 06:56:48.164160] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.628 [2024-04-17 06:56:48.164195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.628 qpair failed and we were unable to recover it. 00:30:43.628 [2024-04-17 06:56:48.173946] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.628 [2024-04-17 06:56:48.174079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.628 [2024-04-17 06:56:48.174108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.628 [2024-04-17 06:56:48.174124] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.628 [2024-04-17 06:56:48.174136] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.628 [2024-04-17 06:56:48.174166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.628 qpair failed and we were unable to recover it. 
00:30:43.628 [2024-04-17 06:56:48.183962] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.628 [2024-04-17 06:56:48.184103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.628 [2024-04-17 06:56:48.184128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.628 [2024-04-17 06:56:48.184143] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.628 [2024-04-17 06:56:48.184156] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.628 [2024-04-17 06:56:48.184201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.628 qpair failed and we were unable to recover it. 00:30:43.628 [2024-04-17 06:56:48.194061] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.628 [2024-04-17 06:56:48.194200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.629 [2024-04-17 06:56:48.194226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.629 [2024-04-17 06:56:48.194240] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.629 [2024-04-17 06:56:48.194253] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.629 [2024-04-17 06:56:48.194282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.629 qpair failed and we were unable to recover it. 00:30:43.629 [2024-04-17 06:56:48.204042] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.629 [2024-04-17 06:56:48.204173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.629 [2024-04-17 06:56:48.204205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.629 [2024-04-17 06:56:48.204220] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.629 [2024-04-17 06:56:48.204233] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.629 [2024-04-17 06:56:48.204261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.629 qpair failed and we were unable to recover it. 
00:30:43.629 [2024-04-17 06:56:48.214159] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.629 [2024-04-17 06:56:48.214298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.629 [2024-04-17 06:56:48.214323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.629 [2024-04-17 06:56:48.214337] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.629 [2024-04-17 06:56:48.214350] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.629 [2024-04-17 06:56:48.214379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.629 qpair failed and we were unable to recover it. 00:30:43.629 [2024-04-17 06:56:48.224079] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.629 [2024-04-17 06:56:48.224233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.629 [2024-04-17 06:56:48.224262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.629 [2024-04-17 06:56:48.224277] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.629 [2024-04-17 06:56:48.224289] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.629 [2024-04-17 06:56:48.224319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.629 qpair failed and we were unable to recover it. 00:30:43.629 [2024-04-17 06:56:48.234096] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.629 [2024-04-17 06:56:48.234270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.629 [2024-04-17 06:56:48.234301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.887 [2024-04-17 06:56:48.234317] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.888 [2024-04-17 06:56:48.234329] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.888 [2024-04-17 06:56:48.234359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.888 qpair failed and we were unable to recover it. 
00:30:43.888 [2024-04-17 06:56:48.244165] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.888 [2024-04-17 06:56:48.244309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.888 [2024-04-17 06:56:48.244335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.888 [2024-04-17 06:56:48.244350] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.888 [2024-04-17 06:56:48.244362] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.888 [2024-04-17 06:56:48.244391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.888 qpair failed and we were unable to recover it. 00:30:43.888 [2024-04-17 06:56:48.254215] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.888 [2024-04-17 06:56:48.254375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.888 [2024-04-17 06:56:48.254400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.888 [2024-04-17 06:56:48.254415] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.888 [2024-04-17 06:56:48.254427] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.888 [2024-04-17 06:56:48.254456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.888 qpair failed and we were unable to recover it. 00:30:43.888 [2024-04-17 06:56:48.264199] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.888 [2024-04-17 06:56:48.264332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.888 [2024-04-17 06:56:48.264358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.888 [2024-04-17 06:56:48.264372] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.888 [2024-04-17 06:56:48.264385] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.888 [2024-04-17 06:56:48.264414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.888 qpair failed and we were unable to recover it. 
00:30:43.888 [2024-04-17 06:56:48.274254] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.888 [2024-04-17 06:56:48.274387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.888 [2024-04-17 06:56:48.274412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.888 [2024-04-17 06:56:48.274426] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.888 [2024-04-17 06:56:48.274444] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.888 [2024-04-17 06:56:48.274475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.888 qpair failed and we were unable to recover it. 00:30:43.888 [2024-04-17 06:56:48.284277] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.888 [2024-04-17 06:56:48.284411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.888 [2024-04-17 06:56:48.284436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.888 [2024-04-17 06:56:48.284451] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.888 [2024-04-17 06:56:48.284463] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.888 [2024-04-17 06:56:48.284492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.888 qpair failed and we were unable to recover it. 00:30:43.888 [2024-04-17 06:56:48.294272] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.888 [2024-04-17 06:56:48.294403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.888 [2024-04-17 06:56:48.294429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.888 [2024-04-17 06:56:48.294444] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.888 [2024-04-17 06:56:48.294467] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.888 [2024-04-17 06:56:48.294495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.888 qpair failed and we were unable to recover it. 
00:30:43.888 [2024-04-17 06:56:48.304309] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.888 [2024-04-17 06:56:48.304476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.888 [2024-04-17 06:56:48.304501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.888 [2024-04-17 06:56:48.304516] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.888 [2024-04-17 06:56:48.304528] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.888 [2024-04-17 06:56:48.304558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.888 qpair failed and we were unable to recover it. 00:30:43.888 [2024-04-17 06:56:48.314367] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.888 [2024-04-17 06:56:48.314510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.888 [2024-04-17 06:56:48.314535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.888 [2024-04-17 06:56:48.314549] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.888 [2024-04-17 06:56:48.314562] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.888 [2024-04-17 06:56:48.314590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.888 qpair failed and we were unable to recover it. 00:30:43.888 [2024-04-17 06:56:48.324394] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.888 [2024-04-17 06:56:48.324528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.888 [2024-04-17 06:56:48.324553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.888 [2024-04-17 06:56:48.324568] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.888 [2024-04-17 06:56:48.324581] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.888 [2024-04-17 06:56:48.324610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.888 qpair failed and we were unable to recover it. 
00:30:43.888 [2024-04-17 06:56:48.334411] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.888 [2024-04-17 06:56:48.334553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.888 [2024-04-17 06:56:48.334579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.888 [2024-04-17 06:56:48.334593] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.888 [2024-04-17 06:56:48.334606] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.888 [2024-04-17 06:56:48.334635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.888 qpair failed and we were unable to recover it. 00:30:43.888 [2024-04-17 06:56:48.344428] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.888 [2024-04-17 06:56:48.344572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.888 [2024-04-17 06:56:48.344597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.888 [2024-04-17 06:56:48.344611] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.888 [2024-04-17 06:56:48.344624] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.888 [2024-04-17 06:56:48.344653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.888 qpair failed and we were unable to recover it. 00:30:43.888 [2024-04-17 06:56:48.354547] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.888 [2024-04-17 06:56:48.354701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.888 [2024-04-17 06:56:48.354727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.888 [2024-04-17 06:56:48.354742] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.888 [2024-04-17 06:56:48.354755] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.888 [2024-04-17 06:56:48.354795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.888 qpair failed and we were unable to recover it. 
00:30:43.888 [2024-04-17 06:56:48.364567] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.888 [2024-04-17 06:56:48.364718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.888 [2024-04-17 06:56:48.364743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.888 [2024-04-17 06:56:48.364757] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.888 [2024-04-17 06:56:48.364776] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.888 [2024-04-17 06:56:48.364806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.889 qpair failed and we were unable to recover it. 00:30:43.889 [2024-04-17 06:56:48.374498] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.889 [2024-04-17 06:56:48.374632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.889 [2024-04-17 06:56:48.374656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.889 [2024-04-17 06:56:48.374672] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.889 [2024-04-17 06:56:48.374684] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.889 [2024-04-17 06:56:48.374713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.889 qpair failed and we were unable to recover it. 00:30:43.889 [2024-04-17 06:56:48.384689] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.889 [2024-04-17 06:56:48.384829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.889 [2024-04-17 06:56:48.384854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.889 [2024-04-17 06:56:48.384869] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.889 [2024-04-17 06:56:48.384882] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.889 [2024-04-17 06:56:48.384922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.889 qpair failed and we were unable to recover it. 
00:30:43.889 [2024-04-17 06:56:48.394564] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.889 [2024-04-17 06:56:48.394742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.889 [2024-04-17 06:56:48.394768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.889 [2024-04-17 06:56:48.394782] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.889 [2024-04-17 06:56:48.394795] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.889 [2024-04-17 06:56:48.394824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.889 qpair failed and we were unable to recover it. 00:30:43.889 [2024-04-17 06:56:48.404606] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.889 [2024-04-17 06:56:48.404742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.889 [2024-04-17 06:56:48.404768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.889 [2024-04-17 06:56:48.404782] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.889 [2024-04-17 06:56:48.404795] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.889 [2024-04-17 06:56:48.404824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.889 qpair failed and we were unable to recover it. 00:30:43.889 [2024-04-17 06:56:48.414647] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.889 [2024-04-17 06:56:48.414825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.889 [2024-04-17 06:56:48.414850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.889 [2024-04-17 06:56:48.414880] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.889 [2024-04-17 06:56:48.414894] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.889 [2024-04-17 06:56:48.414923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.889 qpair failed and we were unable to recover it. 
00:30:43.889 [2024-04-17 06:56:48.424759] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.889 [2024-04-17 06:56:48.424943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.889 [2024-04-17 06:56:48.424970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.889 [2024-04-17 06:56:48.424985] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.889 [2024-04-17 06:56:48.424999] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.889 [2024-04-17 06:56:48.425041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.889 qpair failed and we were unable to recover it. 00:30:43.889 [2024-04-17 06:56:48.434672] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.889 [2024-04-17 06:56:48.434803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.889 [2024-04-17 06:56:48.434830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.889 [2024-04-17 06:56:48.434846] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.889 [2024-04-17 06:56:48.434859] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.889 [2024-04-17 06:56:48.434887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.889 qpair failed and we were unable to recover it. 00:30:43.889 [2024-04-17 06:56:48.444734] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.889 [2024-04-17 06:56:48.444889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.889 [2024-04-17 06:56:48.444915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.889 [2024-04-17 06:56:48.444930] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.889 [2024-04-17 06:56:48.444958] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.889 [2024-04-17 06:56:48.444987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.889 qpair failed and we were unable to recover it. 
00:30:43.889 [2024-04-17 06:56:48.454763] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.889 [2024-04-17 06:56:48.454890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.889 [2024-04-17 06:56:48.454916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.889 [2024-04-17 06:56:48.454939] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.889 [2024-04-17 06:56:48.454953] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.889 [2024-04-17 06:56:48.454999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.889 qpair failed and we were unable to recover it. 00:30:43.889 [2024-04-17 06:56:48.464772] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.889 [2024-04-17 06:56:48.464904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.889 [2024-04-17 06:56:48.464930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.889 [2024-04-17 06:56:48.464944] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.889 [2024-04-17 06:56:48.464957] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.889 [2024-04-17 06:56:48.464986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.889 qpair failed and we were unable to recover it. 00:30:43.889 [2024-04-17 06:56:48.474894] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.889 [2024-04-17 06:56:48.475021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.889 [2024-04-17 06:56:48.475047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.889 [2024-04-17 06:56:48.475062] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.889 [2024-04-17 06:56:48.475074] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.889 [2024-04-17 06:56:48.475114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.889 qpair failed and we were unable to recover it. 
00:30:43.889 [2024-04-17 06:56:48.484875] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.889 [2024-04-17 06:56:48.485013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.889 [2024-04-17 06:56:48.485039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.889 [2024-04-17 06:56:48.485054] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.889 [2024-04-17 06:56:48.485067] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:43.889 [2024-04-17 06:56:48.485111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.889 qpair failed and we were unable to recover it. 00:30:44.148 [2024-04-17 06:56:48.494864] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.148 [2024-04-17 06:56:48.495006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.148 [2024-04-17 06:56:48.495033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.148 [2024-04-17 06:56:48.495047] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.148 [2024-04-17 06:56:48.495060] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.148 [2024-04-17 06:56:48.495088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.148 qpair failed and we were unable to recover it. 00:30:44.148 [2024-04-17 06:56:48.504912] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.148 [2024-04-17 06:56:48.505072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.148 [2024-04-17 06:56:48.505099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.148 [2024-04-17 06:56:48.505114] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.148 [2024-04-17 06:56:48.505127] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.148 [2024-04-17 06:56:48.505156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.148 qpair failed and we were unable to recover it. 
00:30:44.148 [2024-04-17 06:56:48.514910] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.148 [2024-04-17 06:56:48.515060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.148 [2024-04-17 06:56:48.515087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.148 [2024-04-17 06:56:48.515102] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.148 [2024-04-17 06:56:48.515115] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.148 [2024-04-17 06:56:48.515144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.148 qpair failed and we were unable to recover it. 00:30:44.148 [2024-04-17 06:56:48.524937] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.149 [2024-04-17 06:56:48.525068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.149 [2024-04-17 06:56:48.525095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.149 [2024-04-17 06:56:48.525110] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.149 [2024-04-17 06:56:48.525122] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.149 [2024-04-17 06:56:48.525151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.149 qpair failed and we were unable to recover it. 00:30:44.149 [2024-04-17 06:56:48.535049] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.149 [2024-04-17 06:56:48.535184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.149 [2024-04-17 06:56:48.535210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.149 [2024-04-17 06:56:48.535225] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.149 [2024-04-17 06:56:48.535238] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.149 [2024-04-17 06:56:48.535280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.149 qpair failed and we were unable to recover it. 
00:30:44.149 [2024-04-17 06:56:48.545110] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.149 [2024-04-17 06:56:48.545253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.149 [2024-04-17 06:56:48.545285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.149 [2024-04-17 06:56:48.545301] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.149 [2024-04-17 06:56:48.545314] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.149 [2024-04-17 06:56:48.545355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.149 qpair failed and we were unable to recover it. 00:30:44.149 [2024-04-17 06:56:48.555065] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.149 [2024-04-17 06:56:48.555220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.149 [2024-04-17 06:56:48.555246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.149 [2024-04-17 06:56:48.555263] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.149 [2024-04-17 06:56:48.555276] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.149 [2024-04-17 06:56:48.555306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.149 qpair failed and we were unable to recover it. 00:30:44.149 [2024-04-17 06:56:48.565049] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.149 [2024-04-17 06:56:48.565195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.149 [2024-04-17 06:56:48.565222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.149 [2024-04-17 06:56:48.565237] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.149 [2024-04-17 06:56:48.565249] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.149 [2024-04-17 06:56:48.565278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.149 qpair failed and we were unable to recover it. 
00:30:44.149 [2024-04-17 06:56:48.575108] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.149 [2024-04-17 06:56:48.575270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.149 [2024-04-17 06:56:48.575302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.149 [2024-04-17 06:56:48.575317] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.149 [2024-04-17 06:56:48.575329] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.149 [2024-04-17 06:56:48.575357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.149 qpair failed and we were unable to recover it. 00:30:44.149 [2024-04-17 06:56:48.585127] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.149 [2024-04-17 06:56:48.585292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.149 [2024-04-17 06:56:48.585318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.149 [2024-04-17 06:56:48.585332] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.149 [2024-04-17 06:56:48.585345] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.149 [2024-04-17 06:56:48.585380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.149 qpair failed and we were unable to recover it. 00:30:44.149 [2024-04-17 06:56:48.595153] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.149 [2024-04-17 06:56:48.595344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.149 [2024-04-17 06:56:48.595372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.149 [2024-04-17 06:56:48.595387] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.149 [2024-04-17 06:56:48.595401] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.149 [2024-04-17 06:56:48.595430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.149 qpair failed and we were unable to recover it. 
00:30:44.149 [2024-04-17 06:56:48.605206] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.149 [2024-04-17 06:56:48.605357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.149 [2024-04-17 06:56:48.605385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.149 [2024-04-17 06:56:48.605404] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.149 [2024-04-17 06:56:48.605417] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.149 [2024-04-17 06:56:48.605447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.149 qpair failed and we were unable to recover it. 00:30:44.149 [2024-04-17 06:56:48.615216] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.149 [2024-04-17 06:56:48.615391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.149 [2024-04-17 06:56:48.615417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.149 [2024-04-17 06:56:48.615431] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.149 [2024-04-17 06:56:48.615444] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.149 [2024-04-17 06:56:48.615474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.149 qpair failed and we were unable to recover it. 00:30:44.149 [2024-04-17 06:56:48.625241] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.149 [2024-04-17 06:56:48.625401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.149 [2024-04-17 06:56:48.625426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.149 [2024-04-17 06:56:48.625441] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.149 [2024-04-17 06:56:48.625454] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.149 [2024-04-17 06:56:48.625483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.149 qpair failed and we were unable to recover it. 
00:30:44.149 [2024-04-17 06:56:48.635260] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.149 [2024-04-17 06:56:48.635393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.149 [2024-04-17 06:56:48.635423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.149 [2024-04-17 06:56:48.635439] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.149 [2024-04-17 06:56:48.635452] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.149 [2024-04-17 06:56:48.635481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.149 qpair failed and we were unable to recover it. 00:30:44.149 [2024-04-17 06:56:48.645422] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.149 [2024-04-17 06:56:48.645598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.149 [2024-04-17 06:56:48.645625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.149 [2024-04-17 06:56:48.645640] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.149 [2024-04-17 06:56:48.645654] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.149 [2024-04-17 06:56:48.645709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.149 qpair failed and we were unable to recover it. 00:30:44.149 [2024-04-17 06:56:48.655411] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.149 [2024-04-17 06:56:48.655537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.149 [2024-04-17 06:56:48.655562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.149 [2024-04-17 06:56:48.655577] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.150 [2024-04-17 06:56:48.655590] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.150 [2024-04-17 06:56:48.655630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.150 qpair failed and we were unable to recover it. 
00:30:44.150 [2024-04-17 06:56:48.665366] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.150 [2024-04-17 06:56:48.665507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.150 [2024-04-17 06:56:48.665533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.150 [2024-04-17 06:56:48.665548] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.150 [2024-04-17 06:56:48.665560] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.150 [2024-04-17 06:56:48.665589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.150 qpair failed and we were unable to recover it. 00:30:44.150 [2024-04-17 06:56:48.675481] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.150 [2024-04-17 06:56:48.675616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.150 [2024-04-17 06:56:48.675641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.150 [2024-04-17 06:56:48.675656] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.150 [2024-04-17 06:56:48.675668] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.150 [2024-04-17 06:56:48.675714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.150 qpair failed and we were unable to recover it. 00:30:44.150 [2024-04-17 06:56:48.685406] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.150 [2024-04-17 06:56:48.685540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.150 [2024-04-17 06:56:48.685566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.150 [2024-04-17 06:56:48.685580] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.150 [2024-04-17 06:56:48.685593] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.150 [2024-04-17 06:56:48.685621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.150 qpair failed and we were unable to recover it. 
00:30:44.150 [2024-04-17 06:56:48.695519] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.150 [2024-04-17 06:56:48.695678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.150 [2024-04-17 06:56:48.695703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.150 [2024-04-17 06:56:48.695717] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.150 [2024-04-17 06:56:48.695731] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.150 [2024-04-17 06:56:48.695772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.150 qpair failed and we were unable to recover it. 00:30:44.150 [2024-04-17 06:56:48.705485] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.150 [2024-04-17 06:56:48.705623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.150 [2024-04-17 06:56:48.705647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.150 [2024-04-17 06:56:48.705662] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.150 [2024-04-17 06:56:48.705674] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.150 [2024-04-17 06:56:48.705703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.150 qpair failed and we were unable to recover it. 00:30:44.150 [2024-04-17 06:56:48.715483] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.150 [2024-04-17 06:56:48.715624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.150 [2024-04-17 06:56:48.715653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.150 [2024-04-17 06:56:48.715668] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.150 [2024-04-17 06:56:48.715681] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.150 [2024-04-17 06:56:48.715710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.150 qpair failed and we were unable to recover it. 
00:30:44.150 [2024-04-17 06:56:48.725505] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.150 [2024-04-17 06:56:48.725645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.150 [2024-04-17 06:56:48.725671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.150 [2024-04-17 06:56:48.725685] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.150 [2024-04-17 06:56:48.725698] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.150 [2024-04-17 06:56:48.725741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.150 qpair failed and we were unable to recover it. 00:30:44.150 [2024-04-17 06:56:48.735521] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.150 [2024-04-17 06:56:48.735653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.150 [2024-04-17 06:56:48.735678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.150 [2024-04-17 06:56:48.735693] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.150 [2024-04-17 06:56:48.735705] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.150 [2024-04-17 06:56:48.735734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.150 qpair failed and we were unable to recover it. 00:30:44.150 [2024-04-17 06:56:48.745607] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.150 [2024-04-17 06:56:48.745792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.150 [2024-04-17 06:56:48.745817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.150 [2024-04-17 06:56:48.745832] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.150 [2024-04-17 06:56:48.745845] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.150 [2024-04-17 06:56:48.745874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.150 qpair failed and we were unable to recover it. 
00:30:44.408 [2024-04-17 06:56:48.755582] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.408 [2024-04-17 06:56:48.755713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.408 [2024-04-17 06:56:48.755739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.408 [2024-04-17 06:56:48.755753] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.408 [2024-04-17 06:56:48.755766] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.408 [2024-04-17 06:56:48.755795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.408 qpair failed and we were unable to recover it. 00:30:44.408 [2024-04-17 06:56:48.765653] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.408 [2024-04-17 06:56:48.765793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.408 [2024-04-17 06:56:48.765820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.408 [2024-04-17 06:56:48.765835] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.408 [2024-04-17 06:56:48.765856] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.409 [2024-04-17 06:56:48.765901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.409 qpair failed and we were unable to recover it. 00:30:44.409 [2024-04-17 06:56:48.775731] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.409 [2024-04-17 06:56:48.775859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.409 [2024-04-17 06:56:48.775886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.409 [2024-04-17 06:56:48.775901] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.409 [2024-04-17 06:56:48.775913] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.409 [2024-04-17 06:56:48.775954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.409 qpair failed and we were unable to recover it. 
00:30:44.409 [2024-04-17 06:56:48.785680] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.409 [2024-04-17 06:56:48.785812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.409 [2024-04-17 06:56:48.785839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.409 [2024-04-17 06:56:48.785854] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.409 [2024-04-17 06:56:48.785866] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.409 [2024-04-17 06:56:48.785895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.409 qpair failed and we were unable to recover it. 00:30:44.409 [2024-04-17 06:56:48.795735] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.409 [2024-04-17 06:56:48.795875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.409 [2024-04-17 06:56:48.795900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.409 [2024-04-17 06:56:48.795914] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.409 [2024-04-17 06:56:48.795926] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.409 [2024-04-17 06:56:48.795955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.409 qpair failed and we were unable to recover it. 00:30:44.409 [2024-04-17 06:56:48.805815] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.409 [2024-04-17 06:56:48.805947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.409 [2024-04-17 06:56:48.805973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.409 [2024-04-17 06:56:48.805988] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.409 [2024-04-17 06:56:48.806001] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.409 [2024-04-17 06:56:48.806042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.409 qpair failed and we were unable to recover it. 
00:30:44.409 [2024-04-17 06:56:48.815802] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.409 [2024-04-17 06:56:48.815981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.409 [2024-04-17 06:56:48.816008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.409 [2024-04-17 06:56:48.816023] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.409 [2024-04-17 06:56:48.816035] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.409 [2024-04-17 06:56:48.816064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.409 qpair failed and we were unable to recover it. 00:30:44.409 [2024-04-17 06:56:48.825866] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.409 [2024-04-17 06:56:48.826039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.409 [2024-04-17 06:56:48.826066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.409 [2024-04-17 06:56:48.826080] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.409 [2024-04-17 06:56:48.826093] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.409 [2024-04-17 06:56:48.826122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.409 qpair failed and we were unable to recover it. 00:30:44.409 [2024-04-17 06:56:48.835872] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.409 [2024-04-17 06:56:48.836014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.409 [2024-04-17 06:56:48.836040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.409 [2024-04-17 06:56:48.836055] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.409 [2024-04-17 06:56:48.836067] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.409 [2024-04-17 06:56:48.836095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.409 qpair failed and we were unable to recover it. 
00:30:44.409 [2024-04-17 06:56:48.845845] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.409 [2024-04-17 06:56:48.845976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.409 [2024-04-17 06:56:48.846003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.409 [2024-04-17 06:56:48.846017] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.409 [2024-04-17 06:56:48.846030] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.409 [2024-04-17 06:56:48.846058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.409 qpair failed and we were unable to recover it. 00:30:44.409 [2024-04-17 06:56:48.855874] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.409 [2024-04-17 06:56:48.856020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.409 [2024-04-17 06:56:48.856047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.409 [2024-04-17 06:56:48.856067] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.409 [2024-04-17 06:56:48.856081] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.409 [2024-04-17 06:56:48.856110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.409 qpair failed and we were unable to recover it. 00:30:44.409 [2024-04-17 06:56:48.866030] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.409 [2024-04-17 06:56:48.866201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.409 [2024-04-17 06:56:48.866231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.409 [2024-04-17 06:56:48.866248] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.409 [2024-04-17 06:56:48.866260] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.409 [2024-04-17 06:56:48.866301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.409 qpair failed and we were unable to recover it. 
00:30:44.409 [2024-04-17 06:56:48.875949] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.409 [2024-04-17 06:56:48.876094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.409 [2024-04-17 06:56:48.876121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.409 [2024-04-17 06:56:48.876137] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.409 [2024-04-17 06:56:48.876149] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.409 [2024-04-17 06:56:48.876186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.409 qpair failed and we were unable to recover it. 00:30:44.409 [2024-04-17 06:56:48.885984] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.409 [2024-04-17 06:56:48.886128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.409 [2024-04-17 06:56:48.886154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.409 [2024-04-17 06:56:48.886169] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.409 [2024-04-17 06:56:48.886190] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.409 [2024-04-17 06:56:48.886220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.409 qpair failed and we were unable to recover it. 00:30:44.409 [2024-04-17 06:56:48.896021] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.409 [2024-04-17 06:56:48.896162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.409 [2024-04-17 06:56:48.896196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.409 [2024-04-17 06:56:48.896212] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.409 [2024-04-17 06:56:48.896224] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.409 [2024-04-17 06:56:48.896253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.409 qpair failed and we were unable to recover it. 
00:30:44.409 [2024-04-17 06:56:48.906139] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.409 [2024-04-17 06:56:48.906287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.409 [2024-04-17 06:56:48.906315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.409 [2024-04-17 06:56:48.906330] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.409 [2024-04-17 06:56:48.906343] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.409 [2024-04-17 06:56:48.906372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.409 qpair failed and we were unable to recover it. 00:30:44.409 [2024-04-17 06:56:48.916094] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.409 [2024-04-17 06:56:48.916235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.409 [2024-04-17 06:56:48.916262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.409 [2024-04-17 06:56:48.916277] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.409 [2024-04-17 06:56:48.916290] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.409 [2024-04-17 06:56:48.916319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.409 qpair failed and we were unable to recover it. 00:30:44.409 [2024-04-17 06:56:48.926088] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.409 [2024-04-17 06:56:48.926232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.409 [2024-04-17 06:56:48.926259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.409 [2024-04-17 06:56:48.926274] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.409 [2024-04-17 06:56:48.926286] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.409 [2024-04-17 06:56:48.926315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.409 qpair failed and we were unable to recover it. 
00:30:44.409 [2024-04-17 06:56:48.936119] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.409 [2024-04-17 06:56:48.936255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.409 [2024-04-17 06:56:48.936282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.409 [2024-04-17 06:56:48.936297] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.409 [2024-04-17 06:56:48.936309] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.409 [2024-04-17 06:56:48.936338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.409 qpair failed and we were unable to recover it. 00:30:44.409 [2024-04-17 06:56:48.946265] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.409 [2024-04-17 06:56:48.946401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.409 [2024-04-17 06:56:48.946428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.409 [2024-04-17 06:56:48.946448] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.409 [2024-04-17 06:56:48.946461] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.409 [2024-04-17 06:56:48.946490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.409 qpair failed and we were unable to recover it. 00:30:44.409 [2024-04-17 06:56:48.956280] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.409 [2024-04-17 06:56:48.956416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.409 [2024-04-17 06:56:48.956443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.409 [2024-04-17 06:56:48.956457] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.409 [2024-04-17 06:56:48.956470] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.409 [2024-04-17 06:56:48.956499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.409 qpair failed and we were unable to recover it. 
00:30:44.409 [2024-04-17 06:56:48.966286] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.409 [2024-04-17 06:56:48.966433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.409 [2024-04-17 06:56:48.966459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.409 [2024-04-17 06:56:48.966474] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.409 [2024-04-17 06:56:48.966487] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.409 [2024-04-17 06:56:48.966531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.409 qpair failed and we were unable to recover it. 00:30:44.409 [2024-04-17 06:56:48.976320] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.409 [2024-04-17 06:56:48.976501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.409 [2024-04-17 06:56:48.976527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.409 [2024-04-17 06:56:48.976557] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.409 [2024-04-17 06:56:48.976569] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.409 [2024-04-17 06:56:48.976598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.409 qpair failed and we were unable to recover it. 00:30:44.409 [2024-04-17 06:56:48.986320] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.409 [2024-04-17 06:56:48.986453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.409 [2024-04-17 06:56:48.986478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.409 [2024-04-17 06:56:48.986493] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.409 [2024-04-17 06:56:48.986505] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.409 [2024-04-17 06:56:48.986535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.409 qpair failed and we were unable to recover it. 
00:30:44.409 [2024-04-17 06:56:48.996407] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.409 [2024-04-17 06:56:48.996544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.409 [2024-04-17 06:56:48.996570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.409 [2024-04-17 06:56:48.996585] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.409 [2024-04-17 06:56:48.996597] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.409 [2024-04-17 06:56:48.996626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.409 qpair failed and we were unable to recover it. 00:30:44.409 [2024-04-17 06:56:49.006413] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.409 [2024-04-17 06:56:49.006551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.409 [2024-04-17 06:56:49.006577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.409 [2024-04-17 06:56:49.006592] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.409 [2024-04-17 06:56:49.006604] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.409 [2024-04-17 06:56:49.006633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.410 qpair failed and we were unable to recover it. 00:30:44.668 [2024-04-17 06:56:49.016422] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.668 [2024-04-17 06:56:49.016566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.668 [2024-04-17 06:56:49.016593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.668 [2024-04-17 06:56:49.016607] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.668 [2024-04-17 06:56:49.016620] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.668 [2024-04-17 06:56:49.016648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.668 qpair failed and we were unable to recover it. 
00:30:44.668 [2024-04-17 06:56:49.026475] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.668 [2024-04-17 06:56:49.026606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.668 [2024-04-17 06:56:49.026632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.668 [2024-04-17 06:56:49.026647] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.668 [2024-04-17 06:56:49.026659] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.668 [2024-04-17 06:56:49.026688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.668 qpair failed and we were unable to recover it. 00:30:44.668 [2024-04-17 06:56:49.036409] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.668 [2024-04-17 06:56:49.036563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.668 [2024-04-17 06:56:49.036593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.668 [2024-04-17 06:56:49.036609] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.668 [2024-04-17 06:56:49.036621] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.668 [2024-04-17 06:56:49.036650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.668 qpair failed and we were unable to recover it. 00:30:44.668 [2024-04-17 06:56:49.046407] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.668 [2024-04-17 06:56:49.046586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.668 [2024-04-17 06:56:49.046623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.668 [2024-04-17 06:56:49.046638] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.668 [2024-04-17 06:56:49.046650] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.668 [2024-04-17 06:56:49.046679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.668 qpair failed and we were unable to recover it. 
00:30:44.668 [2024-04-17 06:56:49.056487] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.668 [2024-04-17 06:56:49.056627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.668 [2024-04-17 06:56:49.056654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.668 [2024-04-17 06:56:49.056668] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.668 [2024-04-17 06:56:49.056680] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.668 [2024-04-17 06:56:49.056709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.668 qpair failed and we were unable to recover it. 00:30:44.668 [2024-04-17 06:56:49.066525] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.668 [2024-04-17 06:56:49.066667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.668 [2024-04-17 06:56:49.066693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.668 [2024-04-17 06:56:49.066708] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.668 [2024-04-17 06:56:49.066720] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.668 [2024-04-17 06:56:49.066749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.668 qpair failed and we were unable to recover it. 00:30:44.668 [2024-04-17 06:56:49.076490] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.668 [2024-04-17 06:56:49.076628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.668 [2024-04-17 06:56:49.076653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.668 [2024-04-17 06:56:49.076668] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.668 [2024-04-17 06:56:49.076680] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.668 [2024-04-17 06:56:49.076714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.669 qpair failed and we were unable to recover it. 
00:30:44.669 [2024-04-17 06:56:49.086573] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.669 [2024-04-17 06:56:49.086745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.669 [2024-04-17 06:56:49.086771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.669 [2024-04-17 06:56:49.086786] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.669 [2024-04-17 06:56:49.086798] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.669 [2024-04-17 06:56:49.086827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.669 qpair failed and we were unable to recover it. 00:30:44.669 [2024-04-17 06:56:49.096567] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.669 [2024-04-17 06:56:49.096701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.669 [2024-04-17 06:56:49.096727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.669 [2024-04-17 06:56:49.096741] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.669 [2024-04-17 06:56:49.096753] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.669 [2024-04-17 06:56:49.096782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.669 qpair failed and we were unable to recover it. 00:30:44.669 [2024-04-17 06:56:49.106653] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.669 [2024-04-17 06:56:49.106800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.669 [2024-04-17 06:56:49.106826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.669 [2024-04-17 06:56:49.106840] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.669 [2024-04-17 06:56:49.106853] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.669 [2024-04-17 06:56:49.106881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.669 qpair failed and we were unable to recover it. 
00:30:44.669 [2024-04-17 06:56:49.116643] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.669 [2024-04-17 06:56:49.116793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.669 [2024-04-17 06:56:49.116818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.669 [2024-04-17 06:56:49.116833] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.669 [2024-04-17 06:56:49.116845] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.669 [2024-04-17 06:56:49.116873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.669 qpair failed and we were unable to recover it. 00:30:44.669 [2024-04-17 06:56:49.126630] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.669 [2024-04-17 06:56:49.126765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.669 [2024-04-17 06:56:49.126796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.669 [2024-04-17 06:56:49.126812] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.669 [2024-04-17 06:56:49.126824] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.669 [2024-04-17 06:56:49.126853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.669 qpair failed and we were unable to recover it. 00:30:44.669 [2024-04-17 06:56:49.136704] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.669 [2024-04-17 06:56:49.136875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.669 [2024-04-17 06:56:49.136902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.669 [2024-04-17 06:56:49.136916] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.669 [2024-04-17 06:56:49.136929] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.669 [2024-04-17 06:56:49.136957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.669 qpair failed and we were unable to recover it. 
00:30:44.669 [2024-04-17 06:56:49.146788] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.669 [2024-04-17 06:56:49.146952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.669 [2024-04-17 06:56:49.146977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.669 [2024-04-17 06:56:49.146993] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.669 [2024-04-17 06:56:49.147020] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.669 [2024-04-17 06:56:49.147050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.669 qpair failed and we were unable to recover it. 00:30:44.669 [2024-04-17 06:56:49.156795] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.669 [2024-04-17 06:56:49.156935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.669 [2024-04-17 06:56:49.156961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.669 [2024-04-17 06:56:49.156976] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.669 [2024-04-17 06:56:49.156990] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.669 [2024-04-17 06:56:49.157019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.669 qpair failed and we were unable to recover it. 00:30:44.669 [2024-04-17 06:56:49.166797] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.669 [2024-04-17 06:56:49.166945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.669 [2024-04-17 06:56:49.166971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.669 [2024-04-17 06:56:49.166986] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.669 [2024-04-17 06:56:49.167007] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.669 [2024-04-17 06:56:49.167052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.669 qpair failed and we were unable to recover it. 
00:30:44.669 [2024-04-17 06:56:49.176810] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.669 [2024-04-17 06:56:49.176949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.669 [2024-04-17 06:56:49.176975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.669 [2024-04-17 06:56:49.176989] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.669 [2024-04-17 06:56:49.177001] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.669 [2024-04-17 06:56:49.177030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.669 qpair failed and we were unable to recover it. 00:30:44.669 [2024-04-17 06:56:49.186851] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.669 [2024-04-17 06:56:49.187014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.669 [2024-04-17 06:56:49.187040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.669 [2024-04-17 06:56:49.187055] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.669 [2024-04-17 06:56:49.187067] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.669 [2024-04-17 06:56:49.187096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.669 qpair failed and we were unable to recover it. 00:30:44.669 [2024-04-17 06:56:49.196883] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.669 [2024-04-17 06:56:49.197013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.669 [2024-04-17 06:56:49.197039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.669 [2024-04-17 06:56:49.197053] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.669 [2024-04-17 06:56:49.197066] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.669 [2024-04-17 06:56:49.197094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.669 qpair failed and we were unable to recover it. 
00:30:44.669 [2024-04-17 06:56:49.206862] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.669 [2024-04-17 06:56:49.206999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.669 [2024-04-17 06:56:49.207037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.669 [2024-04-17 06:56:49.207052] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.669 [2024-04-17 06:56:49.207064] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.669 [2024-04-17 06:56:49.207093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.669 qpair failed and we were unable to recover it. 00:30:44.669 [2024-04-17 06:56:49.216926] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.669 [2024-04-17 06:56:49.217067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.670 [2024-04-17 06:56:49.217093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.670 [2024-04-17 06:56:49.217108] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.670 [2024-04-17 06:56:49.217120] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.670 [2024-04-17 06:56:49.217149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.670 qpair failed and we were unable to recover it. 00:30:44.670 [2024-04-17 06:56:49.226952] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.670 [2024-04-17 06:56:49.227091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.670 [2024-04-17 06:56:49.227118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.670 [2024-04-17 06:56:49.227133] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.670 [2024-04-17 06:56:49.227145] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.670 [2024-04-17 06:56:49.227173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.670 qpair failed and we were unable to recover it. 
00:30:44.670 [2024-04-17 06:56:49.236971] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.670 [2024-04-17 06:56:49.237155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.670 [2024-04-17 06:56:49.237190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.670 [2024-04-17 06:56:49.237206] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.670 [2024-04-17 06:56:49.237219] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.670 [2024-04-17 06:56:49.237248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.670 qpair failed and we were unable to recover it. 00:30:44.670 [2024-04-17 06:56:49.246979] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.670 [2024-04-17 06:56:49.247118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.670 [2024-04-17 06:56:49.247146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.670 [2024-04-17 06:56:49.247164] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.670 [2024-04-17 06:56:49.247185] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.670 [2024-04-17 06:56:49.247219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.670 qpair failed and we were unable to recover it. 00:30:44.670 [2024-04-17 06:56:49.257018] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.670 [2024-04-17 06:56:49.257155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.670 [2024-04-17 06:56:49.257187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.670 [2024-04-17 06:56:49.257211] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.670 [2024-04-17 06:56:49.257225] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.670 [2024-04-17 06:56:49.257255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.670 qpair failed and we were unable to recover it. 
00:30:44.670 [2024-04-17 06:56:49.267089] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.670 [2024-04-17 06:56:49.267235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.670 [2024-04-17 06:56:49.267261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.670 [2024-04-17 06:56:49.267276] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.670 [2024-04-17 06:56:49.267289] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.670 [2024-04-17 06:56:49.267318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.670 qpair failed and we were unable to recover it. 00:30:44.929 [2024-04-17 06:56:49.277069] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.929 [2024-04-17 06:56:49.277252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.929 [2024-04-17 06:56:49.277278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.929 [2024-04-17 06:56:49.277293] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.929 [2024-04-17 06:56:49.277305] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.929 [2024-04-17 06:56:49.277334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.929 qpair failed and we were unable to recover it. 00:30:44.929 [2024-04-17 06:56:49.287104] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.929 [2024-04-17 06:56:49.287248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.929 [2024-04-17 06:56:49.287274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.929 [2024-04-17 06:56:49.287288] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.929 [2024-04-17 06:56:49.287300] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.929 [2024-04-17 06:56:49.287329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.929 qpair failed and we were unable to recover it. 
00:30:44.929 [2024-04-17 06:56:49.297159] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.929 [2024-04-17 06:56:49.297352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.929 [2024-04-17 06:56:49.297378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.929 [2024-04-17 06:56:49.297393] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.929 [2024-04-17 06:56:49.297410] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.929 [2024-04-17 06:56:49.297438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.929 qpair failed and we were unable to recover it. 00:30:44.929 [2024-04-17 06:56:49.307202] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.929 [2024-04-17 06:56:49.307346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.929 [2024-04-17 06:56:49.307372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.929 [2024-04-17 06:56:49.307387] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.929 [2024-04-17 06:56:49.307399] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.929 [2024-04-17 06:56:49.307429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.929 qpair failed and we were unable to recover it. 00:30:44.929 [2024-04-17 06:56:49.317239] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.929 [2024-04-17 06:56:49.317408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.929 [2024-04-17 06:56:49.317433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.929 [2024-04-17 06:56:49.317449] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.929 [2024-04-17 06:56:49.317463] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.930 [2024-04-17 06:56:49.317491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.930 qpair failed and we were unable to recover it. 
00:30:44.930 [2024-04-17 06:56:49.327188] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.930 [2024-04-17 06:56:49.327329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.930 [2024-04-17 06:56:49.327355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.930 [2024-04-17 06:56:49.327370] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.930 [2024-04-17 06:56:49.327382] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.930 [2024-04-17 06:56:49.327410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.930 qpair failed and we were unable to recover it. 00:30:44.930 [2024-04-17 06:56:49.337241] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.930 [2024-04-17 06:56:49.337377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.930 [2024-04-17 06:56:49.337402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.930 [2024-04-17 06:56:49.337417] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.930 [2024-04-17 06:56:49.337429] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.930 [2024-04-17 06:56:49.337459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.930 qpair failed and we were unable to recover it. 00:30:44.930 [2024-04-17 06:56:49.347281] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.930 [2024-04-17 06:56:49.347432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.930 [2024-04-17 06:56:49.347457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.930 [2024-04-17 06:56:49.347478] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.930 [2024-04-17 06:56:49.347491] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.930 [2024-04-17 06:56:49.347520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.930 qpair failed and we were unable to recover it. 
00:30:44.930 [2024-04-17 06:56:49.357332] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.930 [2024-04-17 06:56:49.357493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.930 [2024-04-17 06:56:49.357518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.930 [2024-04-17 06:56:49.357533] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.930 [2024-04-17 06:56:49.357545] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.930 [2024-04-17 06:56:49.357574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.930 qpair failed and we were unable to recover it. 00:30:44.930 [2024-04-17 06:56:49.367326] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.930 [2024-04-17 06:56:49.367462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.930 [2024-04-17 06:56:49.367487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.930 [2024-04-17 06:56:49.367502] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.930 [2024-04-17 06:56:49.367515] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.930 [2024-04-17 06:56:49.367544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.930 qpair failed and we were unable to recover it. 00:30:44.930 [2024-04-17 06:56:49.377323] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.930 [2024-04-17 06:56:49.377473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.930 [2024-04-17 06:56:49.377499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.930 [2024-04-17 06:56:49.377514] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.930 [2024-04-17 06:56:49.377526] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.930 [2024-04-17 06:56:49.377554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.930 qpair failed and we were unable to recover it. 
00:30:44.930 [2024-04-17 06:56:49.387370] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.930 [2024-04-17 06:56:49.387548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.930 [2024-04-17 06:56:49.387574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.930 [2024-04-17 06:56:49.387589] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.930 [2024-04-17 06:56:49.387602] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.930 [2024-04-17 06:56:49.387631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.930 qpair failed and we were unable to recover it. 00:30:44.930 [2024-04-17 06:56:49.397378] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.930 [2024-04-17 06:56:49.397548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.930 [2024-04-17 06:56:49.397574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.930 [2024-04-17 06:56:49.397589] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.930 [2024-04-17 06:56:49.397601] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.930 [2024-04-17 06:56:49.397630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.930 qpair failed and we were unable to recover it. 00:30:44.930 [2024-04-17 06:56:49.407431] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.930 [2024-04-17 06:56:49.407567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.930 [2024-04-17 06:56:49.407594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.930 [2024-04-17 06:56:49.407608] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.930 [2024-04-17 06:56:49.407621] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.930 [2024-04-17 06:56:49.407650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.930 qpair failed and we were unable to recover it. 
00:30:44.930 [2024-04-17 06:56:49.417434] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.930 [2024-04-17 06:56:49.417591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.930 [2024-04-17 06:56:49.417617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.930 [2024-04-17 06:56:49.417633] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.930 [2024-04-17 06:56:49.417646] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.930 [2024-04-17 06:56:49.417674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.930 qpair failed and we were unable to recover it. 00:30:44.930 [2024-04-17 06:56:49.427530] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.930 [2024-04-17 06:56:49.427672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.930 [2024-04-17 06:56:49.427698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.930 [2024-04-17 06:56:49.427713] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.930 [2024-04-17 06:56:49.427725] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.930 [2024-04-17 06:56:49.427754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.930 qpair failed and we were unable to recover it. 00:30:44.930 [2024-04-17 06:56:49.437485] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.930 [2024-04-17 06:56:49.437616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.930 [2024-04-17 06:56:49.437647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.931 [2024-04-17 06:56:49.437663] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.931 [2024-04-17 06:56:49.437675] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.931 [2024-04-17 06:56:49.437704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.931 qpair failed and we were unable to recover it. 
00:30:44.931 [2024-04-17 06:56:49.447522] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.931 [2024-04-17 06:56:49.447671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.931 [2024-04-17 06:56:49.447697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.931 [2024-04-17 06:56:49.447712] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.931 [2024-04-17 06:56:49.447724] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.931 [2024-04-17 06:56:49.447753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.931 qpair failed and we were unable to recover it. 00:30:44.931 [2024-04-17 06:56:49.457563] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.931 [2024-04-17 06:56:49.457726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.931 [2024-04-17 06:56:49.457754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.931 [2024-04-17 06:56:49.457769] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.931 [2024-04-17 06:56:49.457785] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.931 [2024-04-17 06:56:49.457831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.931 qpair failed and we were unable to recover it. 00:30:44.931 [2024-04-17 06:56:49.467611] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.931 [2024-04-17 06:56:49.467783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.931 [2024-04-17 06:56:49.467810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.931 [2024-04-17 06:56:49.467825] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.931 [2024-04-17 06:56:49.467838] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.931 [2024-04-17 06:56:49.467867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.931 qpair failed and we were unable to recover it. 
00:30:44.931 [2024-04-17 06:56:49.477619] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.931 [2024-04-17 06:56:49.477778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.931 [2024-04-17 06:56:49.477804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.931 [2024-04-17 06:56:49.477819] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.931 [2024-04-17 06:56:49.477831] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.931 [2024-04-17 06:56:49.477866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.931 qpair failed and we were unable to recover it. 00:30:44.931 [2024-04-17 06:56:49.487624] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.931 [2024-04-17 06:56:49.487759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.931 [2024-04-17 06:56:49.487785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.931 [2024-04-17 06:56:49.487800] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.931 [2024-04-17 06:56:49.487812] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.931 [2024-04-17 06:56:49.487841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.931 qpair failed and we were unable to recover it. 00:30:44.931 [2024-04-17 06:56:49.497689] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.931 [2024-04-17 06:56:49.497824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.931 [2024-04-17 06:56:49.497850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.931 [2024-04-17 06:56:49.497865] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.931 [2024-04-17 06:56:49.497877] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.931 [2024-04-17 06:56:49.497906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.931 qpair failed and we were unable to recover it. 
00:30:44.931 [2024-04-17 06:56:49.507701] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.931 [2024-04-17 06:56:49.507841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.931 [2024-04-17 06:56:49.507866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.931 [2024-04-17 06:56:49.507881] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.931 [2024-04-17 06:56:49.507893] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.931 [2024-04-17 06:56:49.507923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.931 qpair failed and we were unable to recover it. 00:30:44.931 [2024-04-17 06:56:49.517792] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.931 [2024-04-17 06:56:49.517936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.931 [2024-04-17 06:56:49.517963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.931 [2024-04-17 06:56:49.517978] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.931 [2024-04-17 06:56:49.517990] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.931 [2024-04-17 06:56:49.518019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.931 qpair failed and we were unable to recover it. 00:30:44.931 [2024-04-17 06:56:49.527786] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.931 [2024-04-17 06:56:49.527922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.931 [2024-04-17 06:56:49.527952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.931 [2024-04-17 06:56:49.527968] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.931 [2024-04-17 06:56:49.527981] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:44.931 [2024-04-17 06:56:49.528025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:44.931 qpair failed and we were unable to recover it. 
00:30:45.191 [2024-04-17 06:56:49.537788] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.191 [2024-04-17 06:56:49.537937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.191 [2024-04-17 06:56:49.537963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.191 [2024-04-17 06:56:49.537978] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.191 [2024-04-17 06:56:49.537990] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.191 [2024-04-17 06:56:49.538034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.191 qpair failed and we were unable to recover it. 00:30:45.191 [2024-04-17 06:56:49.547833] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.191 [2024-04-17 06:56:49.547979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.191 [2024-04-17 06:56:49.548006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.191 [2024-04-17 06:56:49.548021] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.191 [2024-04-17 06:56:49.548036] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.191 [2024-04-17 06:56:49.548064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.191 qpair failed and we were unable to recover it. 00:30:45.191 [2024-04-17 06:56:49.557869] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.191 [2024-04-17 06:56:49.558052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.191 [2024-04-17 06:56:49.558078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.191 [2024-04-17 06:56:49.558093] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.191 [2024-04-17 06:56:49.558106] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.191 [2024-04-17 06:56:49.558135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.191 qpair failed and we were unable to recover it. 
00:30:45.191 [2024-04-17 06:56:49.567889] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.191 [2024-04-17 06:56:49.568020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.191 [2024-04-17 06:56:49.568046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.191 [2024-04-17 06:56:49.568061] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.191 [2024-04-17 06:56:49.568079] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.191 [2024-04-17 06:56:49.568108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.191 qpair failed and we were unable to recover it. 00:30:45.191 [2024-04-17 06:56:49.577897] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.191 [2024-04-17 06:56:49.578030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.191 [2024-04-17 06:56:49.578056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.191 [2024-04-17 06:56:49.578071] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.191 [2024-04-17 06:56:49.578084] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.191 [2024-04-17 06:56:49.578112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.191 qpair failed and we were unable to recover it. 00:30:45.191 [2024-04-17 06:56:49.587950] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.191 [2024-04-17 06:56:49.588125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.191 [2024-04-17 06:56:49.588150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.191 [2024-04-17 06:56:49.588167] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.191 [2024-04-17 06:56:49.588187] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.191 [2024-04-17 06:56:49.588218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.191 qpair failed and we were unable to recover it. 
00:30:45.191 [2024-04-17 06:56:49.597981] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.191 [2024-04-17 06:56:49.598131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.191 [2024-04-17 06:56:49.598169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.191 [2024-04-17 06:56:49.598195] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.191 [2024-04-17 06:56:49.598209] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.191 [2024-04-17 06:56:49.598238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.191 qpair failed and we were unable to recover it. 00:30:45.191 [2024-04-17 06:56:49.608008] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.191 [2024-04-17 06:56:49.608152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.191 [2024-04-17 06:56:49.608195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.191 [2024-04-17 06:56:49.608210] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.191 [2024-04-17 06:56:49.608223] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.191 [2024-04-17 06:56:49.608252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.191 qpair failed and we were unable to recover it. 00:30:45.191 [2024-04-17 06:56:49.618056] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.191 [2024-04-17 06:56:49.618241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.191 [2024-04-17 06:56:49.618268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.191 [2024-04-17 06:56:49.618282] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.191 [2024-04-17 06:56:49.618295] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.191 [2024-04-17 06:56:49.618324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.191 qpair failed and we were unable to recover it. 
00:30:45.191 [2024-04-17 06:56:49.628063] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.191 [2024-04-17 06:56:49.628238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.191 [2024-04-17 06:56:49.628263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.191 [2024-04-17 06:56:49.628278] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.191 [2024-04-17 06:56:49.628290] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.191 [2024-04-17 06:56:49.628319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.191 qpair failed and we were unable to recover it. 00:30:45.191 [2024-04-17 06:56:49.638095] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.191 [2024-04-17 06:56:49.638249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.191 [2024-04-17 06:56:49.638276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.191 [2024-04-17 06:56:49.638291] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.191 [2024-04-17 06:56:49.638303] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.191 [2024-04-17 06:56:49.638332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.191 qpair failed and we were unable to recover it. 00:30:45.191 [2024-04-17 06:56:49.648141] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.191 [2024-04-17 06:56:49.648292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.191 [2024-04-17 06:56:49.648318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.191 [2024-04-17 06:56:49.648332] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.191 [2024-04-17 06:56:49.648345] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.191 [2024-04-17 06:56:49.648374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.191 qpair failed and we were unable to recover it. 
00:30:45.192 [2024-04-17 06:56:49.658182] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.192 [2024-04-17 06:56:49.658363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.192 [2024-04-17 06:56:49.658389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.192 [2024-04-17 06:56:49.658404] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.192 [2024-04-17 06:56:49.658422] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.192 [2024-04-17 06:56:49.658453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.192 qpair failed and we were unable to recover it. 00:30:45.192 [2024-04-17 06:56:49.668208] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.192 [2024-04-17 06:56:49.668348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.192 [2024-04-17 06:56:49.668374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.192 [2024-04-17 06:56:49.668389] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.192 [2024-04-17 06:56:49.668401] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.192 [2024-04-17 06:56:49.668430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.192 qpair failed and we were unable to recover it. 00:30:45.192 [2024-04-17 06:56:49.678213] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.192 [2024-04-17 06:56:49.678356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.192 [2024-04-17 06:56:49.678382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.192 [2024-04-17 06:56:49.678398] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.192 [2024-04-17 06:56:49.678410] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.192 [2024-04-17 06:56:49.678439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.192 qpair failed and we were unable to recover it. 
00:30:45.192 [2024-04-17 06:56:49.688243] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.192 [2024-04-17 06:56:49.688376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.192 [2024-04-17 06:56:49.688402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.192 [2024-04-17 06:56:49.688417] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.192 [2024-04-17 06:56:49.688429] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.192 [2024-04-17 06:56:49.688472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.192 qpair failed and we were unable to recover it. 00:30:45.192 [2024-04-17 06:56:49.698291] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.192 [2024-04-17 06:56:49.698429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.192 [2024-04-17 06:56:49.698455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.192 [2024-04-17 06:56:49.698470] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.192 [2024-04-17 06:56:49.698483] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.192 [2024-04-17 06:56:49.698522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.192 qpair failed and we were unable to recover it. 00:30:45.192 [2024-04-17 06:56:49.708304] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.192 [2024-04-17 06:56:49.708457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.192 [2024-04-17 06:56:49.708483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.192 [2024-04-17 06:56:49.708498] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.192 [2024-04-17 06:56:49.708514] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.192 [2024-04-17 06:56:49.708543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.192 qpair failed and we were unable to recover it. 
00:30:45.192 [2024-04-17 06:56:49.718375] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.192 [2024-04-17 06:56:49.718534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.192 [2024-04-17 06:56:49.718560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.192 [2024-04-17 06:56:49.718575] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.192 [2024-04-17 06:56:49.718587] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.192 [2024-04-17 06:56:49.718616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.192 qpair failed and we were unable to recover it. 00:30:45.192 [2024-04-17 06:56:49.728399] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.192 [2024-04-17 06:56:49.728535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.192 [2024-04-17 06:56:49.728561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.192 [2024-04-17 06:56:49.728576] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.192 [2024-04-17 06:56:49.728589] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.192 [2024-04-17 06:56:49.728617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.192 qpair failed and we were unable to recover it. 00:30:45.192 [2024-04-17 06:56:49.738438] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.192 [2024-04-17 06:56:49.738612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.192 [2024-04-17 06:56:49.738653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.192 [2024-04-17 06:56:49.738668] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.192 [2024-04-17 06:56:49.738680] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.192 [2024-04-17 06:56:49.738723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.192 qpair failed and we were unable to recover it. 
00:30:45.192 [2024-04-17 06:56:49.748412] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.192 [2024-04-17 06:56:49.748547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.192 [2024-04-17 06:56:49.748573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.192 [2024-04-17 06:56:49.748593] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.192 [2024-04-17 06:56:49.748606] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.192 [2024-04-17 06:56:49.748636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.192 qpair failed and we were unable to recover it. 00:30:45.192 [2024-04-17 06:56:49.758446] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.192 [2024-04-17 06:56:49.758590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.192 [2024-04-17 06:56:49.758616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.192 [2024-04-17 06:56:49.758631] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.192 [2024-04-17 06:56:49.758643] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.192 [2024-04-17 06:56:49.758671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.192 qpair failed and we were unable to recover it. 00:30:45.192 [2024-04-17 06:56:49.768527] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.192 [2024-04-17 06:56:49.768707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.192 [2024-04-17 06:56:49.768748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.192 [2024-04-17 06:56:49.768763] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.192 [2024-04-17 06:56:49.768774] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.192 [2024-04-17 06:56:49.768817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.192 qpair failed and we were unable to recover it. 
00:30:45.192 [2024-04-17 06:56:49.778532] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.192 [2024-04-17 06:56:49.778668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.192 [2024-04-17 06:56:49.778694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.192 [2024-04-17 06:56:49.778709] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.192 [2024-04-17 06:56:49.778722] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.192 [2024-04-17 06:56:49.778751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.192 qpair failed and we were unable to recover it. 00:30:45.192 [2024-04-17 06:56:49.788548] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.192 [2024-04-17 06:56:49.788711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.193 [2024-04-17 06:56:49.788736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.193 [2024-04-17 06:56:49.788752] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.193 [2024-04-17 06:56:49.788765] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.193 [2024-04-17 06:56:49.788793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.193 qpair failed and we were unable to recover it. 00:30:45.452 [2024-04-17 06:56:49.798552] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.452 [2024-04-17 06:56:49.798689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.452 [2024-04-17 06:56:49.798714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.452 [2024-04-17 06:56:49.798728] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.452 [2024-04-17 06:56:49.798741] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.452 [2024-04-17 06:56:49.798770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.452 qpair failed and we were unable to recover it. 
00:30:45.452 [2024-04-17 06:56:49.808611] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.452 [2024-04-17 06:56:49.808738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.452 [2024-04-17 06:56:49.808763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.452 [2024-04-17 06:56:49.808778] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.452 [2024-04-17 06:56:49.808790] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.452 [2024-04-17 06:56:49.808819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.452 qpair failed and we were unable to recover it. 00:30:45.452 [2024-04-17 06:56:49.818611] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.452 [2024-04-17 06:56:49.818746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.452 [2024-04-17 06:56:49.818770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.452 [2024-04-17 06:56:49.818785] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.452 [2024-04-17 06:56:49.818798] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.452 [2024-04-17 06:56:49.818826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.452 qpair failed and we were unable to recover it. 00:30:45.452 [2024-04-17 06:56:49.828690] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.452 [2024-04-17 06:56:49.828830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.452 [2024-04-17 06:56:49.828855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.452 [2024-04-17 06:56:49.828869] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.452 [2024-04-17 06:56:49.828882] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.452 [2024-04-17 06:56:49.828910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.452 qpair failed and we were unable to recover it. 
00:30:45.452 [2024-04-17 06:56:49.838681] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.452 [2024-04-17 06:56:49.838813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.452 [2024-04-17 06:56:49.838843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.452 [2024-04-17 06:56:49.838859] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.452 [2024-04-17 06:56:49.838871] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.452 [2024-04-17 06:56:49.838900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.452 qpair failed and we were unable to recover it. 00:30:45.452 [2024-04-17 06:56:49.848777] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.452 [2024-04-17 06:56:49.848919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.452 [2024-04-17 06:56:49.848946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.452 [2024-04-17 06:56:49.848960] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.452 [2024-04-17 06:56:49.848973] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.452 [2024-04-17 06:56:49.849002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.452 qpair failed and we were unable to recover it. 00:30:45.452 [2024-04-17 06:56:49.858738] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.452 [2024-04-17 06:56:49.858870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.452 [2024-04-17 06:56:49.858895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.452 [2024-04-17 06:56:49.858911] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.452 [2024-04-17 06:56:49.858923] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.452 [2024-04-17 06:56:49.858967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.452 qpair failed and we were unable to recover it. 
00:30:45.452 [2024-04-17 06:56:49.868848] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.452 [2024-04-17 06:56:49.868982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.452 [2024-04-17 06:56:49.869007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.452 [2024-04-17 06:56:49.869022] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.452 [2024-04-17 06:56:49.869035] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.452 [2024-04-17 06:56:49.869064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.452 qpair failed and we were unable to recover it. 00:30:45.452 [2024-04-17 06:56:49.878899] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.452 [2024-04-17 06:56:49.879056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.452 [2024-04-17 06:56:49.879084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.452 [2024-04-17 06:56:49.879099] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.452 [2024-04-17 06:56:49.879112] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.452 [2024-04-17 06:56:49.879148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.452 qpair failed and we were unable to recover it. 00:30:45.452 [2024-04-17 06:56:49.888873] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.453 [2024-04-17 06:56:49.889005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.453 [2024-04-17 06:56:49.889030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.453 [2024-04-17 06:56:49.889045] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.453 [2024-04-17 06:56:49.889058] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.453 [2024-04-17 06:56:49.889087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.453 qpair failed and we were unable to recover it. 
00:30:45.453 [2024-04-17 06:56:49.898900] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.453 [2024-04-17 06:56:49.899040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.453 [2024-04-17 06:56:49.899065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.453 [2024-04-17 06:56:49.899080] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.453 [2024-04-17 06:56:49.899093] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.453 [2024-04-17 06:56:49.899122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-04-17 06:56:49.908887] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.453 [2024-04-17 06:56:49.909022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.453 [2024-04-17 06:56:49.909050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.453 [2024-04-17 06:56:49.909066] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.453 [2024-04-17 06:56:49.909079] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.453 [2024-04-17 06:56:49.909107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-04-17 06:56:49.918983] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.453 [2024-04-17 06:56:49.919116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.453 [2024-04-17 06:56:49.919142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.453 [2024-04-17 06:56:49.919157] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.453 [2024-04-17 06:56:49.919170] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.453 [2024-04-17 06:56:49.919208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.453 qpair failed and we were unable to recover it. 
00:30:45.453 [2024-04-17 06:56:49.928911] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.453 [2024-04-17 06:56:49.929044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.453 [2024-04-17 06:56:49.929076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.453 [2024-04-17 06:56:49.929091] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.453 [2024-04-17 06:56:49.929103] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.453 [2024-04-17 06:56:49.929132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-04-17 06:56:49.938966] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.453 [2024-04-17 06:56:49.939095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.453 [2024-04-17 06:56:49.939121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.453 [2024-04-17 06:56:49.939136] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.453 [2024-04-17 06:56:49.939149] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.453 [2024-04-17 06:56:49.939185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-04-17 06:56:49.949089] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.453 [2024-04-17 06:56:49.949242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.453 [2024-04-17 06:56:49.949267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.453 [2024-04-17 06:56:49.949281] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.453 [2024-04-17 06:56:49.949294] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.453 [2024-04-17 06:56:49.949323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.453 qpair failed and we were unable to recover it. 
00:30:45.453 [2024-04-17 06:56:49.958998] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.453 [2024-04-17 06:56:49.959127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.453 [2024-04-17 06:56:49.959152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.453 [2024-04-17 06:56:49.959166] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.453 [2024-04-17 06:56:49.959186] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.453 [2024-04-17 06:56:49.959217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-04-17 06:56:49.969122] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.453 [2024-04-17 06:56:49.969252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.453 [2024-04-17 06:56:49.969278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.453 [2024-04-17 06:56:49.969292] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.453 [2024-04-17 06:56:49.969312] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.453 [2024-04-17 06:56:49.969342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-04-17 06:56:49.979139] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.453 [2024-04-17 06:56:49.979275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.453 [2024-04-17 06:56:49.979301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.453 [2024-04-17 06:56:49.979315] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.453 [2024-04-17 06:56:49.979328] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.453 [2024-04-17 06:56:49.979357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.453 qpair failed and we were unable to recover it. 
00:30:45.453 [2024-04-17 06:56:49.989091] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.453 [2024-04-17 06:56:49.989243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.453 [2024-04-17 06:56:49.989269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.453 [2024-04-17 06:56:49.989283] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.453 [2024-04-17 06:56:49.989296] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.453 [2024-04-17 06:56:49.989325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-04-17 06:56:49.999221] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.453 [2024-04-17 06:56:49.999369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.453 [2024-04-17 06:56:49.999395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.453 [2024-04-17 06:56:49.999409] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.453 [2024-04-17 06:56:49.999422] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.453 [2024-04-17 06:56:49.999450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-04-17 06:56:50.009143] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.453 [2024-04-17 06:56:50.009281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.454 [2024-04-17 06:56:50.009308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.454 [2024-04-17 06:56:50.009323] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.454 [2024-04-17 06:56:50.009336] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.454 [2024-04-17 06:56:50.009366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.454 qpair failed and we were unable to recover it. 
00:30:45.454 [2024-04-17 06:56:50.019265] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.454 [2024-04-17 06:56:50.019418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.454 [2024-04-17 06:56:50.019449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.454 [2024-04-17 06:56:50.019464] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.454 [2024-04-17 06:56:50.019477] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.454 [2024-04-17 06:56:50.019508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.454 qpair failed and we were unable to recover it. 00:30:45.454 [2024-04-17 06:56:50.029239] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.454 [2024-04-17 06:56:50.029380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.454 [2024-04-17 06:56:50.029407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.454 [2024-04-17 06:56:50.029423] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.454 [2024-04-17 06:56:50.029437] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.454 [2024-04-17 06:56:50.029467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.454 qpair failed and we were unable to recover it. 00:30:45.454 [2024-04-17 06:56:50.039338] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.454 [2024-04-17 06:56:50.039478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.454 [2024-04-17 06:56:50.039503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.454 [2024-04-17 06:56:50.039518] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.454 [2024-04-17 06:56:50.039531] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.454 [2024-04-17 06:56:50.039559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.454 qpair failed and we were unable to recover it. 
00:30:45.454 [2024-04-17 06:56:50.049306] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.454 [2024-04-17 06:56:50.049441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.454 [2024-04-17 06:56:50.049467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.454 [2024-04-17 06:56:50.049482] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.454 [2024-04-17 06:56:50.049494] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.454 [2024-04-17 06:56:50.049524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.454 qpair failed and we were unable to recover it. 00:30:45.712 [2024-04-17 06:56:50.059394] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.712 [2024-04-17 06:56:50.059538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.712 [2024-04-17 06:56:50.059565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.712 [2024-04-17 06:56:50.059580] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.712 [2024-04-17 06:56:50.059599] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.712 [2024-04-17 06:56:50.059629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.712 qpair failed and we were unable to recover it. 00:30:45.712 [2024-04-17 06:56:50.069549] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.712 [2024-04-17 06:56:50.069709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.712 [2024-04-17 06:56:50.069736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.712 [2024-04-17 06:56:50.069751] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.712 [2024-04-17 06:56:50.069767] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.712 [2024-04-17 06:56:50.069797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.712 qpair failed and we were unable to recover it. 
00:30:45.712 [2024-04-17 06:56:50.079406] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.712 [2024-04-17 06:56:50.079567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.712 [2024-04-17 06:56:50.079593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.712 [2024-04-17 06:56:50.079611] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.713 [2024-04-17 06:56:50.079623] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.713 [2024-04-17 06:56:50.079652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.713 qpair failed and we were unable to recover it. 00:30:45.713 [2024-04-17 06:56:50.089496] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.713 [2024-04-17 06:56:50.089637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.713 [2024-04-17 06:56:50.089662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.713 [2024-04-17 06:56:50.089677] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.713 [2024-04-17 06:56:50.089689] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.713 [2024-04-17 06:56:50.089718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.713 qpair failed and we were unable to recover it. 00:30:45.713 [2024-04-17 06:56:50.099484] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.713 [2024-04-17 06:56:50.099643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.713 [2024-04-17 06:56:50.099669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.713 [2024-04-17 06:56:50.099698] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.713 [2024-04-17 06:56:50.099710] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.713 [2024-04-17 06:56:50.099739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.713 qpair failed and we were unable to recover it. 
00:30:45.713 [2024-04-17 06:56:50.109479] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.713 [2024-04-17 06:56:50.109637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.713 [2024-04-17 06:56:50.109665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.713 [2024-04-17 06:56:50.109680] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.713 [2024-04-17 06:56:50.109693] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.713 [2024-04-17 06:56:50.109724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.713 qpair failed and we were unable to recover it. 00:30:45.713 [2024-04-17 06:56:50.119527] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.713 [2024-04-17 06:56:50.119665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.713 [2024-04-17 06:56:50.119691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.713 [2024-04-17 06:56:50.119706] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.713 [2024-04-17 06:56:50.119719] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.713 [2024-04-17 06:56:50.119749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.713 qpair failed and we were unable to recover it. 00:30:45.713 [2024-04-17 06:56:50.129498] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.713 [2024-04-17 06:56:50.129633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.713 [2024-04-17 06:56:50.129657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.713 [2024-04-17 06:56:50.129672] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.713 [2024-04-17 06:56:50.129684] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.713 [2024-04-17 06:56:50.129713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.713 qpair failed and we were unable to recover it. 
00:30:45.713 [2024-04-17 06:56:50.139545] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.713 [2024-04-17 06:56:50.139679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.713 [2024-04-17 06:56:50.139705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.713 [2024-04-17 06:56:50.139720] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.713 [2024-04-17 06:56:50.139748] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.713 [2024-04-17 06:56:50.139777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.713 qpair failed and we were unable to recover it. 00:30:45.713 [2024-04-17 06:56:50.149574] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.713 [2024-04-17 06:56:50.149742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.713 [2024-04-17 06:56:50.149768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.713 [2024-04-17 06:56:50.149792] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.713 [2024-04-17 06:56:50.149808] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.713 [2024-04-17 06:56:50.149838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.713 qpair failed and we were unable to recover it. 00:30:45.713 [2024-04-17 06:56:50.159598] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.713 [2024-04-17 06:56:50.159754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.713 [2024-04-17 06:56:50.159780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.713 [2024-04-17 06:56:50.159795] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.713 [2024-04-17 06:56:50.159807] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.713 [2024-04-17 06:56:50.159836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.713 qpair failed and we were unable to recover it. 
00:30:45.713 [2024-04-17 06:56:50.169601] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.713 [2024-04-17 06:56:50.169741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.713 [2024-04-17 06:56:50.169769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.713 [2024-04-17 06:56:50.169785] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.713 [2024-04-17 06:56:50.169797] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.713 [2024-04-17 06:56:50.169841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.713 qpair failed and we were unable to recover it. 00:30:45.713 [2024-04-17 06:56:50.179761] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.713 [2024-04-17 06:56:50.179943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.713 [2024-04-17 06:56:50.179970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.713 [2024-04-17 06:56:50.180003] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.713 [2024-04-17 06:56:50.180016] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.713 [2024-04-17 06:56:50.180045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.713 qpair failed and we were unable to recover it. 00:30:45.713 [2024-04-17 06:56:50.189655] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.713 [2024-04-17 06:56:50.189794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.713 [2024-04-17 06:56:50.189820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.713 [2024-04-17 06:56:50.189835] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.713 [2024-04-17 06:56:50.189847] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.713 [2024-04-17 06:56:50.189877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.713 qpair failed and we were unable to recover it. 
00:30:45.713 [2024-04-17 06:56:50.199720] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.713 [2024-04-17 06:56:50.199855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.713 [2024-04-17 06:56:50.199882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.713 [2024-04-17 06:56:50.199896] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.713 [2024-04-17 06:56:50.199909] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.713 [2024-04-17 06:56:50.199938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.713 qpair failed and we were unable to recover it. 00:30:45.713 [2024-04-17 06:56:50.209711] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.713 [2024-04-17 06:56:50.209846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.713 [2024-04-17 06:56:50.209873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.713 [2024-04-17 06:56:50.209888] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.713 [2024-04-17 06:56:50.209901] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.713 [2024-04-17 06:56:50.209929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.714 qpair failed and we were unable to recover it. 00:30:45.714 [2024-04-17 06:56:50.219756] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.714 [2024-04-17 06:56:50.219896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.714 [2024-04-17 06:56:50.219923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.714 [2024-04-17 06:56:50.219937] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.714 [2024-04-17 06:56:50.219950] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.714 [2024-04-17 06:56:50.219979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.714 qpair failed and we were unable to recover it. 
00:30:45.714 [2024-04-17 06:56:50.229822] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.714 [2024-04-17 06:56:50.230005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.714 [2024-04-17 06:56:50.230031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.714 [2024-04-17 06:56:50.230046] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.714 [2024-04-17 06:56:50.230059] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.714 [2024-04-17 06:56:50.230088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.714 qpair failed and we were unable to recover it. 00:30:45.714 [2024-04-17 06:56:50.239834] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.714 [2024-04-17 06:56:50.239963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.714 [2024-04-17 06:56:50.239993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.714 [2024-04-17 06:56:50.240008] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.714 [2024-04-17 06:56:50.240021] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.714 [2024-04-17 06:56:50.240050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.714 qpair failed and we were unable to recover it. 00:30:45.714 [2024-04-17 06:56:50.249869] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.714 [2024-04-17 06:56:50.250050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.714 [2024-04-17 06:56:50.250077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.714 [2024-04-17 06:56:50.250092] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.714 [2024-04-17 06:56:50.250105] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.714 [2024-04-17 06:56:50.250135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.714 qpair failed and we were unable to recover it. 
00:30:45.714 [2024-04-17 06:56:50.259936] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.714 [2024-04-17 06:56:50.260075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.714 [2024-04-17 06:56:50.260102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.714 [2024-04-17 06:56:50.260118] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.714 [2024-04-17 06:56:50.260130] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.714 [2024-04-17 06:56:50.260159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.714 qpair failed and we were unable to recover it. 00:30:45.714 [2024-04-17 06:56:50.269907] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.714 [2024-04-17 06:56:50.270042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.714 [2024-04-17 06:56:50.270069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.714 [2024-04-17 06:56:50.270084] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.714 [2024-04-17 06:56:50.270096] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.714 [2024-04-17 06:56:50.270125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.714 qpair failed and we were unable to recover it. 00:30:45.714 [2024-04-17 06:56:50.279945] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.714 [2024-04-17 06:56:50.280079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.714 [2024-04-17 06:56:50.280106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.714 [2024-04-17 06:56:50.280121] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.714 [2024-04-17 06:56:50.280134] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.714 [2024-04-17 06:56:50.280169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.714 qpair failed and we were unable to recover it. 
00:30:45.714 [2024-04-17 06:56:50.289986] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.714 [2024-04-17 06:56:50.290145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.714 [2024-04-17 06:56:50.290171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.714 [2024-04-17 06:56:50.290196] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.714 [2024-04-17 06:56:50.290210] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.714 [2024-04-17 06:56:50.290239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.714 qpair failed and we were unable to recover it. 00:30:45.714 [2024-04-17 06:56:50.300032] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.714 [2024-04-17 06:56:50.300197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.714 [2024-04-17 06:56:50.300225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.714 [2024-04-17 06:56:50.300240] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.714 [2024-04-17 06:56:50.300253] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.714 [2024-04-17 06:56:50.300282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.714 qpair failed and we were unable to recover it. 00:30:45.714 [2024-04-17 06:56:50.309994] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.714 [2024-04-17 06:56:50.310133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.714 [2024-04-17 06:56:50.310160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.714 [2024-04-17 06:56:50.310184] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.714 [2024-04-17 06:56:50.310199] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.714 [2024-04-17 06:56:50.310229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.714 qpair failed and we were unable to recover it. 
00:30:45.973 [2024-04-17 06:56:50.320080] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.973 [2024-04-17 06:56:50.320228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.973 [2024-04-17 06:56:50.320256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.973 [2024-04-17 06:56:50.320271] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.973 [2024-04-17 06:56:50.320284] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.973 [2024-04-17 06:56:50.320313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.973 qpair failed and we were unable to recover it. 00:30:45.973 [2024-04-17 06:56:50.330108] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.973 [2024-04-17 06:56:50.330307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.973 [2024-04-17 06:56:50.330340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.973 [2024-04-17 06:56:50.330356] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.973 [2024-04-17 06:56:50.330368] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.973 [2024-04-17 06:56:50.330398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.973 qpair failed and we were unable to recover it. 00:30:45.973 [2024-04-17 06:56:50.340125] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.973 [2024-04-17 06:56:50.340280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.973 [2024-04-17 06:56:50.340307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.973 [2024-04-17 06:56:50.340322] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.973 [2024-04-17 06:56:50.340335] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.973 [2024-04-17 06:56:50.340364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.973 qpair failed and we were unable to recover it. 
00:30:45.973 [2024-04-17 06:56:50.350171] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.973 [2024-04-17 06:56:50.350321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.973 [2024-04-17 06:56:50.350348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.973 [2024-04-17 06:56:50.350363] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.973 [2024-04-17 06:56:50.350376] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.973 [2024-04-17 06:56:50.350405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.973 qpair failed and we were unable to recover it. 00:30:45.973 [2024-04-17 06:56:50.360200] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.973 [2024-04-17 06:56:50.360333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.973 [2024-04-17 06:56:50.360360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.973 [2024-04-17 06:56:50.360375] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.973 [2024-04-17 06:56:50.360388] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.973 [2024-04-17 06:56:50.360418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.973 qpair failed and we were unable to recover it. 00:30:45.973 [2024-04-17 06:56:50.370218] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.973 [2024-04-17 06:56:50.370354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.973 [2024-04-17 06:56:50.370381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.973 [2024-04-17 06:56:50.370396] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.973 [2024-04-17 06:56:50.370408] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.973 [2024-04-17 06:56:50.370447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.973 qpair failed and we were unable to recover it. 
00:30:45.973 [2024-04-17 06:56:50.380220] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.973 [2024-04-17 06:56:50.380348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.973 [2024-04-17 06:56:50.380375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.973 [2024-04-17 06:56:50.380391] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.973 [2024-04-17 06:56:50.380404] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.973 [2024-04-17 06:56:50.380433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.973 qpair failed and we were unable to recover it. 00:30:45.973 [2024-04-17 06:56:50.390268] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.973 [2024-04-17 06:56:50.390432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.973 [2024-04-17 06:56:50.390458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.973 [2024-04-17 06:56:50.390473] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.973 [2024-04-17 06:56:50.390486] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.973 [2024-04-17 06:56:50.390515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.973 qpair failed and we were unable to recover it. 00:30:45.973 [2024-04-17 06:56:50.400292] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.973 [2024-04-17 06:56:50.400464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.973 [2024-04-17 06:56:50.400490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.973 [2024-04-17 06:56:50.400504] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.973 [2024-04-17 06:56:50.400517] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.973 [2024-04-17 06:56:50.400547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.973 qpair failed and we were unable to recover it. 
00:30:45.973 [2024-04-17 06:56:50.410318] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.973 [2024-04-17 06:56:50.410450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.973 [2024-04-17 06:56:50.410475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.973 [2024-04-17 06:56:50.410490] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.973 [2024-04-17 06:56:50.410502] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.973 [2024-04-17 06:56:50.410531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.973 qpair failed and we were unable to recover it. 00:30:45.973 [2024-04-17 06:56:50.420352] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.973 [2024-04-17 06:56:50.420494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.974 [2024-04-17 06:56:50.420521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.974 [2024-04-17 06:56:50.420535] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.974 [2024-04-17 06:56:50.420548] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.974 [2024-04-17 06:56:50.420576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.974 qpair failed and we were unable to recover it. 00:30:45.974 [2024-04-17 06:56:50.430414] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.974 [2024-04-17 06:56:50.430610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.974 [2024-04-17 06:56:50.430636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.974 [2024-04-17 06:56:50.430651] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.974 [2024-04-17 06:56:50.430664] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.974 [2024-04-17 06:56:50.430693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.974 qpair failed and we were unable to recover it. 
00:30:45.974 [2024-04-17 06:56:50.440455] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.974 [2024-04-17 06:56:50.440614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.974 [2024-04-17 06:56:50.440640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.974 [2024-04-17 06:56:50.440655] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.974 [2024-04-17 06:56:50.440668] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.974 [2024-04-17 06:56:50.440697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.974 qpair failed and we were unable to recover it. 00:30:45.974 [2024-04-17 06:56:50.450445] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.974 [2024-04-17 06:56:50.450617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.974 [2024-04-17 06:56:50.450658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.974 [2024-04-17 06:56:50.450672] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.974 [2024-04-17 06:56:50.450685] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.974 [2024-04-17 06:56:50.450714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.974 qpair failed and we were unable to recover it. 00:30:45.974 [2024-04-17 06:56:50.460440] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.974 [2024-04-17 06:56:50.460574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.974 [2024-04-17 06:56:50.460600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.974 [2024-04-17 06:56:50.460614] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.974 [2024-04-17 06:56:50.460632] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.974 [2024-04-17 06:56:50.460662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.974 qpair failed and we were unable to recover it. 
00:30:45.974 [2024-04-17 06:56:50.470557] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.974 [2024-04-17 06:56:50.470695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.974 [2024-04-17 06:56:50.470721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.974 [2024-04-17 06:56:50.470735] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.974 [2024-04-17 06:56:50.470748] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.974 [2024-04-17 06:56:50.470777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.974 qpair failed and we were unable to recover it. 00:30:45.974 [2024-04-17 06:56:50.480521] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.974 [2024-04-17 06:56:50.480651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.974 [2024-04-17 06:56:50.480677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.974 [2024-04-17 06:56:50.480691] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.974 [2024-04-17 06:56:50.480704] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.974 [2024-04-17 06:56:50.480733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.974 qpair failed and we were unable to recover it. 00:30:45.974 [2024-04-17 06:56:50.490533] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.974 [2024-04-17 06:56:50.490663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.974 [2024-04-17 06:56:50.490689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.974 [2024-04-17 06:56:50.490704] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.974 [2024-04-17 06:56:50.490716] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.974 [2024-04-17 06:56:50.490746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.974 qpair failed and we were unable to recover it. 
00:30:45.974 [2024-04-17 06:56:50.500577] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.974 [2024-04-17 06:56:50.500708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.974 [2024-04-17 06:56:50.500734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.974 [2024-04-17 06:56:50.500748] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.974 [2024-04-17 06:56:50.500761] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.974 [2024-04-17 06:56:50.500790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.974 qpair failed and we were unable to recover it. 00:30:45.974 [2024-04-17 06:56:50.510622] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.974 [2024-04-17 06:56:50.510764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.974 [2024-04-17 06:56:50.510789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.974 [2024-04-17 06:56:50.510804] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.974 [2024-04-17 06:56:50.510817] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.974 [2024-04-17 06:56:50.510846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.974 qpair failed and we were unable to recover it. 00:30:45.974 [2024-04-17 06:56:50.520680] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.974 [2024-04-17 06:56:50.520859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.974 [2024-04-17 06:56:50.520885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.974 [2024-04-17 06:56:50.520899] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.974 [2024-04-17 06:56:50.520912] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.974 [2024-04-17 06:56:50.520942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.974 qpair failed and we were unable to recover it. 
00:30:45.974 [2024-04-17 06:56:50.530721] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.974 [2024-04-17 06:56:50.530856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.974 [2024-04-17 06:56:50.530882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.974 [2024-04-17 06:56:50.530897] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.974 [2024-04-17 06:56:50.530910] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.974 [2024-04-17 06:56:50.530939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.974 qpair failed and we were unable to recover it. 00:30:45.974 [2024-04-17 06:56:50.540735] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.974 [2024-04-17 06:56:50.540868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.974 [2024-04-17 06:56:50.540894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.974 [2024-04-17 06:56:50.540908] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.974 [2024-04-17 06:56:50.540920] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.974 [2024-04-17 06:56:50.540949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.974 qpair failed and we were unable to recover it. 00:30:45.974 [2024-04-17 06:56:50.550723] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.974 [2024-04-17 06:56:50.550907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.974 [2024-04-17 06:56:50.550933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.974 [2024-04-17 06:56:50.550953] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.975 [2024-04-17 06:56:50.550967] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.975 [2024-04-17 06:56:50.550998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.975 qpair failed and we were unable to recover it. 
00:30:45.975 [2024-04-17 06:56:50.560741] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.975 [2024-04-17 06:56:50.560886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.975 [2024-04-17 06:56:50.560912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.975 [2024-04-17 06:56:50.560926] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.975 [2024-04-17 06:56:50.560938] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.975 [2024-04-17 06:56:50.560967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.975 qpair failed and we were unable to recover it. 00:30:45.975 [2024-04-17 06:56:50.570772] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.975 [2024-04-17 06:56:50.570920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.975 [2024-04-17 06:56:50.570946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.975 [2024-04-17 06:56:50.570961] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.975 [2024-04-17 06:56:50.570973] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:45.975 [2024-04-17 06:56:50.571002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.975 qpair failed and we were unable to recover it. 00:30:46.242 [2024-04-17 06:56:50.580792] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.242 [2024-04-17 06:56:50.580955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.242 [2024-04-17 06:56:50.580981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.242 [2024-04-17 06:56:50.580995] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.242 [2024-04-17 06:56:50.581008] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:46.242 [2024-04-17 06:56:50.581037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.242 qpair failed and we were unable to recover it. 
00:30:46.242 [2024-04-17 06:56:50.590834] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.242 [2024-04-17 06:56:50.590970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.243 [2024-04-17 06:56:50.590996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.243 [2024-04-17 06:56:50.591010] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.243 [2024-04-17 06:56:50.591023] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:46.243 [2024-04-17 06:56:50.591052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.243 qpair failed and we were unable to recover it. 00:30:46.243 [2024-04-17 06:56:50.600893] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.243 [2024-04-17 06:56:50.601026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.243 [2024-04-17 06:56:50.601052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.243 [2024-04-17 06:56:50.601067] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.243 [2024-04-17 06:56:50.601079] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:46.243 [2024-04-17 06:56:50.601108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.243 qpair failed and we were unable to recover it. 00:30:46.243 [2024-04-17 06:56:50.610870] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.243 [2024-04-17 06:56:50.611031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.243 [2024-04-17 06:56:50.611058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.243 [2024-04-17 06:56:50.611072] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.243 [2024-04-17 06:56:50.611085] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:46.243 [2024-04-17 06:56:50.611127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.243 qpair failed and we were unable to recover it. 
00:30:46.243 [2024-04-17 06:56:50.620898] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.243 [2024-04-17 06:56:50.621044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.243 [2024-04-17 06:56:50.621070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.243 [2024-04-17 06:56:50.621085] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.243 [2024-04-17 06:56:50.621097] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:46.243 [2024-04-17 06:56:50.621126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.243 qpair failed and we were unable to recover it. 00:30:46.243 [2024-04-17 06:56:50.630928] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.243 [2024-04-17 06:56:50.631063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.243 [2024-04-17 06:56:50.631088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.243 [2024-04-17 06:56:50.631103] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.243 [2024-04-17 06:56:50.631116] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:46.243 [2024-04-17 06:56:50.631146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.243 qpair failed and we were unable to recover it. 00:30:46.243 [2024-04-17 06:56:50.640979] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.243 [2024-04-17 06:56:50.641112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.243 [2024-04-17 06:56:50.641139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.243 [2024-04-17 06:56:50.641159] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.243 [2024-04-17 06:56:50.641172] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:46.243 [2024-04-17 06:56:50.641214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.243 qpair failed and we were unable to recover it. 
00:30:46.243 [2024-04-17 06:56:50.651006] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.243 [2024-04-17 06:56:50.651139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.243 [2024-04-17 06:56:50.651165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.243 [2024-04-17 06:56:50.651191] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.243 [2024-04-17 06:56:50.651210] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:46.243 [2024-04-17 06:56:50.651241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.243 qpair failed and we were unable to recover it. 00:30:46.243 [2024-04-17 06:56:50.660998] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.243 [2024-04-17 06:56:50.661133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.243 [2024-04-17 06:56:50.661158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.243 [2024-04-17 06:56:50.661173] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.243 [2024-04-17 06:56:50.661195] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:46.243 [2024-04-17 06:56:50.661225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.243 qpair failed and we were unable to recover it. 00:30:46.243 [2024-04-17 06:56:50.671058] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.243 [2024-04-17 06:56:50.671211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.243 [2024-04-17 06:56:50.671237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.243 [2024-04-17 06:56:50.671252] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.243 [2024-04-17 06:56:50.671264] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:46.243 [2024-04-17 06:56:50.671306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.243 qpair failed and we were unable to recover it. 
00:30:46.243 [2024-04-17 06:56:50.681064] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.243 [2024-04-17 06:56:50.681216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.243 [2024-04-17 06:56:50.681242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.243 [2024-04-17 06:56:50.681257] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.243 [2024-04-17 06:56:50.681270] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:46.243 [2024-04-17 06:56:50.681299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.243 qpair failed and we were unable to recover it. 00:30:46.243 [2024-04-17 06:56:50.691085] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.243 [2024-04-17 06:56:50.691232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.243 [2024-04-17 06:56:50.691259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.243 [2024-04-17 06:56:50.691274] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.243 [2024-04-17 06:56:50.691287] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:46.243 [2024-04-17 06:56:50.691316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.243 qpair failed and we were unable to recover it. 00:30:46.243 [2024-04-17 06:56:50.701119] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.243 [2024-04-17 06:56:50.701255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.243 [2024-04-17 06:56:50.701282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.243 [2024-04-17 06:56:50.701297] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.243 [2024-04-17 06:56:50.701309] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:46.243 [2024-04-17 06:56:50.701351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.243 qpair failed and we were unable to recover it. 
00:30:46.243 [2024-04-17 06:56:50.711156] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.243 [2024-04-17 06:56:50.711297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.243 [2024-04-17 06:56:50.711324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.243 [2024-04-17 06:56:50.711339] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.243 [2024-04-17 06:56:50.711351] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:46.243 [2024-04-17 06:56:50.711381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.243 qpair failed and we were unable to recover it. 00:30:46.243 [2024-04-17 06:56:50.721192] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.243 [2024-04-17 06:56:50.721330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.243 [2024-04-17 06:56:50.721360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.244 [2024-04-17 06:56:50.721376] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.244 [2024-04-17 06:56:50.721389] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:46.244 [2024-04-17 06:56:50.721420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.244 qpair failed and we were unable to recover it. 00:30:46.244 [2024-04-17 06:56:50.731228] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.244 [2024-04-17 06:56:50.731356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.244 [2024-04-17 06:56:50.731389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.244 [2024-04-17 06:56:50.731405] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.244 [2024-04-17 06:56:50.731418] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:46.244 [2024-04-17 06:56:50.731462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.244 qpair failed and we were unable to recover it. 
00:30:46.244 [2024-04-17 06:56:50.741226] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.244 [2024-04-17 06:56:50.741415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.244 [2024-04-17 06:56:50.741442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.244 [2024-04-17 06:56:50.741457] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.244 [2024-04-17 06:56:50.741470] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73ec000b90 00:30:46.244 [2024-04-17 06:56:50.741500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.244 qpair failed and we were unable to recover it. 00:30:46.244 [2024-04-17 06:56:50.751325] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.244 [2024-04-17 06:56:50.751463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.244 [2024-04-17 06:56:50.751496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.244 [2024-04-17 06:56:50.751512] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.244 [2024-04-17 06:56:50.751526] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:30:46.244 [2024-04-17 06:56:50.751557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.244 qpair failed and we were unable to recover it. 00:30:46.244 [2024-04-17 06:56:50.761324] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.244 [2024-04-17 06:56:50.761490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.244 [2024-04-17 06:56:50.761530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.244 [2024-04-17 06:56:50.761562] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.244 [2024-04-17 06:56:50.761575] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73dc000b90 00:30:46.244 [2024-04-17 06:56:50.761620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:46.244 qpair failed and we were unable to recover it. 
00:30:46.244 [2024-04-17 06:56:50.771332] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.244 [2024-04-17 06:56:50.771467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.244 [2024-04-17 06:56:50.771494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.244 [2024-04-17 06:56:50.771508] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.244 [2024-04-17 06:56:50.771520] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73dc000b90 00:30:46.244 [2024-04-17 06:56:50.771556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:46.244 qpair failed and we were unable to recover it. 00:30:46.244 [2024-04-17 06:56:50.781392] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.244 [2024-04-17 06:56:50.781526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.244 [2024-04-17 06:56:50.781558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.244 [2024-04-17 06:56:50.781573] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.244 [2024-04-17 06:56:50.781586] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:30:46.244 [2024-04-17 06:56:50.781617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.244 qpair failed and we were unable to recover it. 00:30:46.244 [2024-04-17 06:56:50.781715] nvme_ctrlr.c:4340:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:30:46.244 A controller has encountered a failure and is being reset. 00:30:46.244 [2024-04-17 06:56:50.791400] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.244 [2024-04-17 06:56:50.791536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.244 [2024-04-17 06:56:50.791568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.244 [2024-04-17 06:56:50.791584] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.244 [2024-04-17 06:56:50.791597] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10df8b0 00:30:46.244 [2024-04-17 06:56:50.791627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:46.244 qpair failed and we were unable to recover it. 
00:30:46.244 [2024-04-17 06:56:50.801429] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.244 [2024-04-17 06:56:50.801568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.244 [2024-04-17 06:56:50.801597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.244 [2024-04-17 06:56:50.801612] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.244 [2024-04-17 06:56:50.801624] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10df8b0
00:30:46.244 [2024-04-17 06:56:50.801652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:46.244 qpair failed and we were unable to recover it.
00:30:46.244 [2024-04-17 06:56:50.801751] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ed3a0 (9): Bad file descriptor
00:30:46.244 Controller properly reset.
00:30:46.244 Initializing NVMe Controllers
00:30:46.244 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:46.244 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:46.244 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:30:46.244 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:30:46.244 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:30:46.244 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:30:46.244 Initialization complete. Launching workers.
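At this point the host gives up on the failed qpairs: the Keep Alive submission failure marks the controller as failed, the reset path closes the remaining TCP connections (the "Bad file descriptor" flush above), and the controller is re-attached cleanly against the same listener before the I/O workers are launched. If one wanted to reproduce the re-attach step by hand through the SPDK RPC layer rather than through the test's own initiator, it would look roughly like the sketch below; the bdev name Nvme0 is an arbitrary placeholder, not something used by this test.

    # Hypothetical manual re-attach to the listener used throughout this log:
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1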
00:30:46.244 Starting thread on core 1
00:30:46.244 Starting thread on core 2
00:30:46.244 Starting thread on core 3
00:30:46.244 Starting thread on core 0
00:30:46.244 06:56:50 -- host/target_disconnect.sh@59 -- # sync
00:30:46.244
00:30:46.244 real 0m10.704s
00:30:46.244 user 0m17.693s
00:30:46.244 sys 0m5.399s
00:30:46.244 06:56:50 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:30:46.244 06:56:50 -- common/autotest_common.sh@10 -- # set +x
00:30:46.244 ************************************
00:30:46.244 END TEST nvmf_target_disconnect_tc2
00:30:46.244 ************************************
00:30:46.521 06:56:50 -- host/target_disconnect.sh@80 -- # '[' -n '' ']'
00:30:46.521 06:56:50 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:30:46.521 06:56:50 -- host/target_disconnect.sh@85 -- # nvmftestfini
00:30:46.521 06:56:50 -- nvmf/common.sh@477 -- # nvmfcleanup
00:30:46.521 06:56:50 -- nvmf/common.sh@117 -- # sync
00:30:46.521 06:56:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:30:46.521 06:56:50 -- nvmf/common.sh@120 -- # set +e
00:30:46.521 06:56:50 -- nvmf/common.sh@121 -- # for i in {1..20}
00:30:46.521 06:56:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:30:46.521 rmmod nvme_tcp
00:30:46.521 rmmod nvme_fabrics
00:30:46.521 rmmod nvme_keyring
00:30:46.521 06:56:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:30:46.521 06:56:50 -- nvmf/common.sh@124 -- # set -e
00:30:46.521 06:56:50 -- nvmf/common.sh@125 -- # return 0
00:30:46.521 06:56:50 -- nvmf/common.sh@478 -- # '[' -n 123712 ']'
00:30:46.521 06:56:50 -- nvmf/common.sh@479 -- # killprocess 123712
00:30:46.521 06:56:50 -- common/autotest_common.sh@936 -- # '[' -z 123712 ']'
00:30:46.521 06:56:50 -- common/autotest_common.sh@940 -- # kill -0 123712
00:30:46.521 06:56:50 -- common/autotest_common.sh@941 -- # uname
00:30:46.521 06:56:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:30:46.521 06:56:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 123712
00:30:46.521 06:56:50 -- common/autotest_common.sh@942 -- # process_name=reactor_4
00:30:46.521 06:56:50 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']'
00:30:46.521 06:56:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 123712'
00:30:46.521 killing process with pid 123712
00:30:46.521 06:56:50 -- common/autotest_common.sh@955 -- # kill 123712
00:30:46.521 06:56:50 -- common/autotest_common.sh@960 -- # wait 123712
00:30:46.779 06:56:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:30:46.779 06:56:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:30:46.779 06:56:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:30:46.779 06:56:51 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:30:46.779 06:56:51 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:30:46.779 06:56:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:46.779 06:56:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:30:46.779 06:56:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:48.681 06:56:53 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:30:48.681
00:30:48.681 real 0m15.442s
00:30:48.681 user 0m43.402s
00:30:48.681 sys 0m7.362s
00:30:48.681 06:56:53 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:30:48.681 06:56:53 -- common/autotest_common.sh@10 -- # set +x
00:30:48.681 ************************************
00:30:48.681 END TEST nvmf_target_disconnect
************************************ 00:30:48.681 06:56:53 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:30:48.681 06:56:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:48.681 06:56:53 -- common/autotest_common.sh@10 -- # set +x 00:30:48.681 06:56:53 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:30:48.681 00:30:48.681 real 22m51.166s 00:30:48.681 user 62m30.757s 00:30:48.681 sys 5m43.590s 00:30:48.681 06:56:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:48.681 06:56:53 -- common/autotest_common.sh@10 -- # set +x 00:30:48.681 ************************************ 00:30:48.681 END TEST nvmf_tcp 00:30:48.681 ************************************ 00:30:48.681 06:56:53 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:30:48.681 06:56:53 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:48.681 06:56:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:48.681 06:56:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:48.681 06:56:53 -- common/autotest_common.sh@10 -- # set +x 00:30:48.939 ************************************ 00:30:48.939 START TEST spdkcli_nvmf_tcp 00:30:48.939 ************************************ 00:30:48.939 06:56:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:48.939 * Looking for test storage... 00:30:48.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:48.939 06:56:53 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:48.939 06:56:53 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:48.939 06:56:53 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:48.939 06:56:53 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:48.939 06:56:53 -- nvmf/common.sh@7 -- # uname -s 00:30:48.939 06:56:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:48.939 06:56:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:48.939 06:56:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:48.939 06:56:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:48.939 06:56:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:48.939 06:56:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:48.939 06:56:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:48.939 06:56:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:48.939 06:56:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:48.939 06:56:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:48.939 06:56:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:48.939 06:56:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:48.939 06:56:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:48.939 06:56:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:48.939 06:56:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:48.939 06:56:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:48.939 06:56:53 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:48.939 06:56:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:48.939 06:56:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:48.939 06:56:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:48.940 06:56:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.940 06:56:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.940 06:56:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.940 06:56:53 -- paths/export.sh@5 -- # export PATH 00:30:48.940 06:56:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.940 06:56:53 -- nvmf/common.sh@47 -- # : 0 00:30:48.940 06:56:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:48.940 06:56:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:48.940 06:56:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:48.940 06:56:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:48.940 06:56:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:48.940 06:56:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:48.940 06:56:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:48.940 06:56:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:48.940 06:56:53 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:48.940 06:56:53 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:48.940 06:56:53 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:48.940 06:56:53 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:48.940 06:56:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:48.940 06:56:53 -- common/autotest_common.sh@10 -- # set +x 00:30:48.940 06:56:53 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:48.940 06:56:53 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=124914 00:30:48.940 06:56:53 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:48.940 06:56:53 -- spdkcli/common.sh@34 -- # 
waitforlisten 124914 00:30:48.940 06:56:53 -- common/autotest_common.sh@817 -- # '[' -z 124914 ']' 00:30:48.940 06:56:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:48.940 06:56:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:48.940 06:56:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:48.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:48.940 06:56:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:48.940 06:56:53 -- common/autotest_common.sh@10 -- # set +x 00:30:48.940 [2024-04-17 06:56:53.480458] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:30:48.940 [2024-04-17 06:56:53.480568] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124914 ] 00:30:48.940 EAL: No free 2048 kB hugepages reported on node 1 00:30:49.197 [2024-04-17 06:56:53.557747] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:49.197 [2024-04-17 06:56:53.660252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:49.197 [2024-04-17 06:56:53.660261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:49.454 06:56:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:49.454 06:56:53 -- common/autotest_common.sh@850 -- # return 0 00:30:49.454 06:56:53 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:49.455 06:56:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:49.455 06:56:53 -- common/autotest_common.sh@10 -- # set +x 00:30:49.455 06:56:53 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:49.455 06:56:53 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:49.455 06:56:53 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:49.455 06:56:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:49.455 06:56:53 -- common/autotest_common.sh@10 -- # set +x 00:30:49.455 06:56:53 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:49.455 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:49.455 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:49.455 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:49.455 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:49.455 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:49.455 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:49.455 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:49.455 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:49.455 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:49.455 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:49.455 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:30:49.455 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:49.455 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:49.455 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:49.455 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:49.455 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:49.455 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:49.455 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:49.455 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:49.455 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:49.455 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:49.455 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:49.455 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:49.455 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:49.455 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:49.455 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:49.455 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:49.455 ' 00:30:49.712 [2024-04-17 06:56:54.212820] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:30:52.237 [2024-04-17 06:56:56.365512] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:53.168 [2024-04-17 06:56:57.589841] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:55.693 [2024-04-17 06:56:59.848799] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:57.591 [2024-04-17 06:57:01.819207] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:58.964 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:58.965 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:58.965 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:58.965 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:58.965 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:58.965 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:58.965 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:58.965 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:58.965 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:58.965 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:58.965 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:58.965 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:58.965 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:58.965 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:58.965 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:58.965 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:58.965 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:58.965 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:58.965 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:58.965 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:58.965 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:58.965 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:58.965 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:58.965 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:58.965 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:58.965 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:58.965 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:58.965 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:58.965 06:57:03 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:58.965 06:57:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:58.965 06:57:03 -- common/autotest_common.sh@10 -- # set +x 00:30:58.965 06:57:03 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:58.965 06:57:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:58.965 06:57:03 -- common/autotest_common.sh@10 -- # set +x 00:30:58.965 06:57:03 -- spdkcli/nvmf.sh@69 -- # check_match 00:30:58.965 06:57:03 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:59.530 06:57:03 
-- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:59.530 06:57:03 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:59.530 06:57:03 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:59.530 06:57:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:59.530 06:57:03 -- common/autotest_common.sh@10 -- # set +x 00:30:59.530 06:57:03 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:59.530 06:57:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:59.530 06:57:03 -- common/autotest_common.sh@10 -- # set +x 00:30:59.530 06:57:03 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:59.530 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:59.530 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:59.530 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:59.530 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:59.530 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:59.530 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:59.530 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:59.530 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:59.530 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:59.531 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:59.531 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:59.531 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:59.531 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:59.531 ' 00:31:04.791 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:04.791 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:04.791 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:04.791 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:04.791 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:04.791 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:04.791 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:04.791 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:04.791 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:04.791 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:04.791 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:31:04.791 Executing 
command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:04.791 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:04.791 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:04.791 06:57:09 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:04.791 06:57:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:04.791 06:57:09 -- common/autotest_common.sh@10 -- # set +x 00:31:04.791 06:57:09 -- spdkcli/nvmf.sh@90 -- # killprocess 124914 00:31:04.791 06:57:09 -- common/autotest_common.sh@936 -- # '[' -z 124914 ']' 00:31:04.791 06:57:09 -- common/autotest_common.sh@940 -- # kill -0 124914 00:31:04.791 06:57:09 -- common/autotest_common.sh@941 -- # uname 00:31:04.791 06:57:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:04.791 06:57:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 124914 00:31:04.791 06:57:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:04.791 06:57:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:04.791 06:57:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 124914' 00:31:04.791 killing process with pid 124914 00:31:04.791 06:57:09 -- common/autotest_common.sh@955 -- # kill 124914 00:31:04.791 [2024-04-17 06:57:09.193937] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:31:04.791 06:57:09 -- common/autotest_common.sh@960 -- # wait 124914 00:31:05.050 06:57:09 -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:05.050 06:57:09 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:05.050 06:57:09 -- spdkcli/common.sh@13 -- # '[' -n 124914 ']' 00:31:05.050 06:57:09 -- spdkcli/common.sh@14 -- # killprocess 124914 00:31:05.050 06:57:09 -- common/autotest_common.sh@936 -- # '[' -z 124914 ']' 00:31:05.050 06:57:09 -- common/autotest_common.sh@940 -- # kill -0 124914 00:31:05.050 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (124914) - No such process 00:31:05.050 06:57:09 -- common/autotest_common.sh@963 -- # echo 'Process with pid 124914 is not found' 00:31:05.050 Process with pid 124914 is not found 00:31:05.050 06:57:09 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:05.050 06:57:09 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:05.050 06:57:09 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:05.050 00:31:05.050 real 0m16.044s 00:31:05.050 user 0m33.947s 00:31:05.050 sys 0m0.802s 00:31:05.050 06:57:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:05.050 06:57:09 -- common/autotest_common.sh@10 -- # set +x 00:31:05.050 ************************************ 00:31:05.050 END TEST spdkcli_nvmf_tcp 00:31:05.050 ************************************ 00:31:05.050 06:57:09 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:05.050 06:57:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:31:05.050 06:57:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:05.050 06:57:09 -- common/autotest_common.sh@10 -- # set +x 00:31:05.050 ************************************ 00:31:05.050 START TEST 
nvmf_identify_passthru 00:31:05.050 ************************************ 00:31:05.050 06:57:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:05.050 * Looking for test storage... 00:31:05.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:05.050 06:57:09 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:05.050 06:57:09 -- nvmf/common.sh@7 -- # uname -s 00:31:05.050 06:57:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:05.050 06:57:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:05.050 06:57:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:05.050 06:57:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:05.050 06:57:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:05.050 06:57:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:05.050 06:57:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:05.050 06:57:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:05.050 06:57:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:05.051 06:57:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:05.051 06:57:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:05.051 06:57:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:05.051 06:57:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:05.051 06:57:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:05.051 06:57:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:05.051 06:57:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:05.051 06:57:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:05.051 06:57:09 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:05.051 06:57:09 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:05.051 06:57:09 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:05.051 06:57:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.051 06:57:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.051 06:57:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.051 06:57:09 -- paths/export.sh@5 -- # export PATH 00:31:05.051 06:57:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.051 06:57:09 -- nvmf/common.sh@47 -- # : 0 00:31:05.051 06:57:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:05.051 06:57:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:05.051 06:57:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:05.051 06:57:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:05.051 06:57:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:05.051 06:57:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:05.051 06:57:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:05.051 06:57:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:05.051 06:57:09 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:05.051 06:57:09 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:05.051 06:57:09 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:05.051 06:57:09 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:05.051 06:57:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.051 06:57:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.051 06:57:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.051 06:57:09 -- paths/export.sh@5 -- # export PATH 00:31:05.051 06:57:09 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.051 06:57:09 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:31:05.051 06:57:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:31:05.051 06:57:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:05.051 06:57:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:31:05.051 06:57:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:31:05.051 06:57:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:31:05.051 06:57:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:05.051 06:57:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:05.051 06:57:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.051 06:57:09 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:31:05.051 06:57:09 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:31:05.051 06:57:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:31:05.051 06:57:09 -- common/autotest_common.sh@10 -- # set +x 00:31:06.977 06:57:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:06.977 06:57:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:31:06.977 06:57:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:06.977 06:57:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:06.977 06:57:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:06.977 06:57:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:06.977 06:57:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:06.977 06:57:11 -- nvmf/common.sh@295 -- # net_devs=() 00:31:06.977 06:57:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:06.977 06:57:11 -- nvmf/common.sh@296 -- # e810=() 00:31:06.977 06:57:11 -- nvmf/common.sh@296 -- # local -ga e810 00:31:06.977 06:57:11 -- nvmf/common.sh@297 -- # x722=() 00:31:06.977 06:57:11 -- nvmf/common.sh@297 -- # local -ga x722 00:31:06.977 06:57:11 -- nvmf/common.sh@298 -- # mlx=() 00:31:06.977 06:57:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:31:06.977 06:57:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:06.977 06:57:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:06.977 06:57:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:06.977 06:57:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:06.977 06:57:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:06.977 06:57:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:06.977 06:57:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:06.977 06:57:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:06.977 06:57:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:06.977 06:57:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:06.977 06:57:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:06.977 06:57:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:06.978 06:57:11 -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:06.978 06:57:11 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:06.978 06:57:11 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:06.978 06:57:11 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:06.978 06:57:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:06.978 06:57:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:06.978 06:57:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:06.978 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:06.978 06:57:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:06.978 06:57:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:06.978 06:57:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:06.978 06:57:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:06.978 06:57:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:06.978 06:57:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:06.978 06:57:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:06.978 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:06.978 06:57:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:06.978 06:57:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:06.978 06:57:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:06.978 06:57:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:06.978 06:57:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:06.978 06:57:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:06.978 06:57:11 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:06.978 06:57:11 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:06.978 06:57:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:06.978 06:57:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:06.978 06:57:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:31:06.978 06:57:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:06.978 06:57:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:06.978 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:06.978 06:57:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:31:06.978 06:57:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:06.978 06:57:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:06.978 06:57:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:31:06.978 06:57:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:06.978 06:57:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:06.978 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:06.978 06:57:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:31:06.978 06:57:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:31:06.978 06:57:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:31:06.978 06:57:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:31:06.978 06:57:11 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:31:06.978 06:57:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:31:06.978 06:57:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:06.978 06:57:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:06.978 06:57:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:06.978 06:57:11 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:06.978 06:57:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:06.978 06:57:11 -- nvmf/common.sh@237 
-- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:06.978 06:57:11 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:06.978 06:57:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:06.978 06:57:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:06.978 06:57:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:06.978 06:57:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:06.978 06:57:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:06.978 06:57:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:06.978 06:57:11 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:06.978 06:57:11 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:06.978 06:57:11 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:06.978 06:57:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:07.236 06:57:11 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:07.236 06:57:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:07.236 06:57:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:07.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:07.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:31:07.236 00:31:07.236 --- 10.0.0.2 ping statistics --- 00:31:07.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.236 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:31:07.236 06:57:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:07.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:07.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:31:07.236 00:31:07.236 --- 10.0.0.1 ping statistics --- 00:31:07.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.236 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:31:07.236 06:57:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:07.236 06:57:11 -- nvmf/common.sh@411 -- # return 0 00:31:07.236 06:57:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:31:07.236 06:57:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:07.236 06:57:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:31:07.236 06:57:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:31:07.236 06:57:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:07.236 06:57:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:31:07.236 06:57:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:31:07.236 06:57:11 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:31:07.236 06:57:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:07.236 06:57:11 -- common/autotest_common.sh@10 -- # set +x 00:31:07.236 06:57:11 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:31:07.236 06:57:11 -- common/autotest_common.sh@1510 -- # bdfs=() 00:31:07.236 06:57:11 -- common/autotest_common.sh@1510 -- # local bdfs 00:31:07.236 06:57:11 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:31:07.236 06:57:11 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:31:07.236 06:57:11 -- common/autotest_common.sh@1499 -- # bdfs=() 00:31:07.236 06:57:11 -- common/autotest_common.sh@1499 -- # local bdfs 00:31:07.236 06:57:11 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:31:07.236 06:57:11 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:07.236 06:57:11 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:31:07.236 06:57:11 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:31:07.236 06:57:11 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:88:00.0 00:31:07.236 06:57:11 -- common/autotest_common.sh@1513 -- # echo 0000:88:00.0 00:31:07.236 06:57:11 -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:31:07.236 06:57:11 -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:31:07.236 06:57:11 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:31:07.236 06:57:11 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:31:07.236 06:57:11 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:31:07.236 EAL: No free 2048 kB hugepages reported on node 1 00:31:11.416 06:57:15 -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:31:11.416 06:57:15 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:31:11.416 06:57:15 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:31:11.416 06:57:15 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:31:11.416 EAL: No free 2048 kB hugepages reported on node 1 00:31:15.599 06:57:20 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:31:15.599 06:57:20 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:31:15.599 06:57:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:15.599 06:57:20 -- common/autotest_common.sh@10 -- # set +x 00:31:15.599 06:57:20 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:31:15.599 06:57:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:15.599 06:57:20 -- common/autotest_common.sh@10 -- # set +x 00:31:15.599 06:57:20 -- target/identify_passthru.sh@31 -- # nvmfpid=130158 00:31:15.599 06:57:20 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:15.599 06:57:20 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:15.599 06:57:20 -- target/identify_passthru.sh@35 -- # waitforlisten 130158 00:31:15.599 06:57:20 -- common/autotest_common.sh@817 -- # '[' -z 130158 ']' 00:31:15.599 06:57:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:15.599 06:57:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:15.599 06:57:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:15.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:15.599 06:57:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:15.599 06:57:20 -- common/autotest_common.sh@10 -- # set +x 00:31:15.599 [2024-04-17 06:57:20.203555] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:31:15.599 [2024-04-17 06:57:20.203634] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:15.857 EAL: No free 2048 kB hugepages reported on node 1 00:31:15.857 [2024-04-17 06:57:20.270148] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:15.857 [2024-04-17 06:57:20.357428] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:15.857 [2024-04-17 06:57:20.357481] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:15.857 [2024-04-17 06:57:20.357522] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:15.857 [2024-04-17 06:57:20.357535] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:15.857 [2024-04-17 06:57:20.357546] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:15.857 [2024-04-17 06:57:20.357601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.857 [2024-04-17 06:57:20.357625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:15.857 [2024-04-17 06:57:20.357684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:15.857 [2024-04-17 06:57:20.357686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.857 06:57:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:15.857 06:57:20 -- common/autotest_common.sh@850 -- # return 0 00:31:15.857 06:57:20 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:31:15.857 06:57:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:15.857 06:57:20 -- common/autotest_common.sh@10 -- # set +x 00:31:15.857 INFO: Log level set to 20 00:31:15.857 INFO: Requests: 00:31:15.857 { 00:31:15.857 "jsonrpc": "2.0", 00:31:15.857 "method": "nvmf_set_config", 00:31:15.857 "id": 1, 00:31:15.857 "params": { 00:31:15.857 "admin_cmd_passthru": { 00:31:15.857 "identify_ctrlr": true 00:31:15.857 } 00:31:15.857 } 00:31:15.857 } 00:31:15.857 00:31:15.857 INFO: response: 00:31:15.857 { 00:31:15.857 "jsonrpc": "2.0", 00:31:15.857 "id": 1, 00:31:15.857 "result": true 00:31:15.857 } 00:31:15.857 00:31:15.857 06:57:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:15.857 06:57:20 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:31:15.857 06:57:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:15.857 06:57:20 -- common/autotest_common.sh@10 -- # set +x 00:31:15.857 INFO: Setting log level to 20 00:31:15.857 INFO: Setting log level to 20 00:31:15.857 INFO: Log level set to 20 00:31:15.857 INFO: Log level set to 20 00:31:15.857 INFO: Requests: 00:31:15.857 { 00:31:15.857 "jsonrpc": "2.0", 00:31:15.857 "method": "framework_start_init", 00:31:15.857 "id": 1 00:31:15.857 } 00:31:15.857 00:31:15.858 INFO: Requests: 00:31:15.858 { 00:31:15.858 "jsonrpc": "2.0", 00:31:15.858 "method": "framework_start_init", 00:31:15.858 "id": 1 00:31:15.858 } 00:31:15.858 00:31:16.115 [2024-04-17 06:57:20.525532] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:31:16.115 INFO: response: 00:31:16.115 { 00:31:16.115 "jsonrpc": "2.0", 00:31:16.115 "id": 1, 00:31:16.115 "result": true 00:31:16.115 } 00:31:16.115 00:31:16.115 INFO: response: 00:31:16.115 { 00:31:16.115 
"jsonrpc": "2.0", 00:31:16.115 "id": 1, 00:31:16.115 "result": true 00:31:16.115 } 00:31:16.115 00:31:16.115 06:57:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:16.115 06:57:20 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:16.115 06:57:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:16.115 06:57:20 -- common/autotest_common.sh@10 -- # set +x 00:31:16.115 INFO: Setting log level to 40 00:31:16.115 INFO: Setting log level to 40 00:31:16.115 INFO: Setting log level to 40 00:31:16.115 [2024-04-17 06:57:20.535577] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:16.115 06:57:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:16.116 06:57:20 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:31:16.116 06:57:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:16.116 06:57:20 -- common/autotest_common.sh@10 -- # set +x 00:31:16.116 06:57:20 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:31:16.116 06:57:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:16.116 06:57:20 -- common/autotest_common.sh@10 -- # set +x 00:31:19.395 Nvme0n1 00:31:19.395 06:57:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:19.395 06:57:23 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:31:19.395 06:57:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:19.395 06:57:23 -- common/autotest_common.sh@10 -- # set +x 00:31:19.395 06:57:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:19.395 06:57:23 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:19.395 06:57:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:19.395 06:57:23 -- common/autotest_common.sh@10 -- # set +x 00:31:19.395 06:57:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:19.395 06:57:23 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:19.395 06:57:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:19.395 06:57:23 -- common/autotest_common.sh@10 -- # set +x 00:31:19.395 [2024-04-17 06:57:23.433275] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:19.395 06:57:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:19.395 06:57:23 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:31:19.395 06:57:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:19.395 06:57:23 -- common/autotest_common.sh@10 -- # set +x 00:31:19.395 [2024-04-17 06:57:23.441008] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:31:19.395 [ 00:31:19.395 { 00:31:19.395 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:19.395 "subtype": "Discovery", 00:31:19.395 "listen_addresses": [], 00:31:19.395 "allow_any_host": true, 00:31:19.395 "hosts": [] 00:31:19.395 }, 00:31:19.395 { 00:31:19.395 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:19.395 "subtype": "NVMe", 00:31:19.395 "listen_addresses": [ 00:31:19.395 { 00:31:19.395 "transport": "TCP", 00:31:19.395 "trtype": "TCP", 00:31:19.395 "adrfam": "IPv4", 00:31:19.395 "traddr": "10.0.0.2", 00:31:19.395 "trsvcid": "4420" 00:31:19.395 } 00:31:19.395 ], 
00:31:19.395 "allow_any_host": true, 00:31:19.395 "hosts": [], 00:31:19.395 "serial_number": "SPDK00000000000001", 00:31:19.395 "model_number": "SPDK bdev Controller", 00:31:19.395 "max_namespaces": 1, 00:31:19.395 "min_cntlid": 1, 00:31:19.395 "max_cntlid": 65519, 00:31:19.395 "namespaces": [ 00:31:19.395 { 00:31:19.395 "nsid": 1, 00:31:19.395 "bdev_name": "Nvme0n1", 00:31:19.395 "name": "Nvme0n1", 00:31:19.395 "nguid": "DC68718DC54246CC9F938D62CCB078F0", 00:31:19.395 "uuid": "dc68718d-c542-46cc-9f93-8d62ccb078f0" 00:31:19.395 } 00:31:19.395 ] 00:31:19.395 } 00:31:19.395 ] 00:31:19.395 06:57:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:19.395 06:57:23 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:19.395 06:57:23 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:31:19.395 06:57:23 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:31:19.395 EAL: No free 2048 kB hugepages reported on node 1 00:31:19.395 06:57:23 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:31:19.395 06:57:23 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:19.395 06:57:23 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:31:19.395 06:57:23 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:31:19.395 EAL: No free 2048 kB hugepages reported on node 1 00:31:19.395 06:57:23 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:31:19.395 06:57:23 -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:31:19.395 06:57:23 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:31:19.395 06:57:23 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:19.395 06:57:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:19.395 06:57:23 -- common/autotest_common.sh@10 -- # set +x 00:31:19.395 06:57:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:19.395 06:57:23 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:31:19.395 06:57:23 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:31:19.395 06:57:23 -- nvmf/common.sh@477 -- # nvmfcleanup 00:31:19.395 06:57:23 -- nvmf/common.sh@117 -- # sync 00:31:19.395 06:57:23 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:19.395 06:57:23 -- nvmf/common.sh@120 -- # set +e 00:31:19.395 06:57:23 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:19.395 06:57:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:19.395 rmmod nvme_tcp 00:31:19.395 rmmod nvme_fabrics 00:31:19.395 rmmod nvme_keyring 00:31:19.395 06:57:23 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:19.395 06:57:23 -- nvmf/common.sh@124 -- # set -e 00:31:19.395 06:57:23 -- nvmf/common.sh@125 -- # return 0 00:31:19.395 06:57:23 -- nvmf/common.sh@478 -- # '[' -n 130158 ']' 00:31:19.395 06:57:23 -- nvmf/common.sh@479 -- # killprocess 130158 00:31:19.395 06:57:23 -- common/autotest_common.sh@936 -- # '[' -z 130158 ']' 00:31:19.395 06:57:23 -- common/autotest_common.sh@940 -- # kill -0 130158 00:31:19.395 06:57:23 -- common/autotest_common.sh@941 -- # uname 00:31:19.395 06:57:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:19.395 
06:57:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 130158 00:31:19.395 06:57:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:19.395 06:57:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:19.395 06:57:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 130158' 00:31:19.395 killing process with pid 130158 00:31:19.395 06:57:23 -- common/autotest_common.sh@955 -- # kill 130158 00:31:19.395 [2024-04-17 06:57:23.780732] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:31:19.395 06:57:23 -- common/autotest_common.sh@960 -- # wait 130158 00:31:20.768 06:57:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:31:20.768 06:57:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:31:20.768 06:57:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:31:20.768 06:57:25 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:20.768 06:57:25 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:20.768 06:57:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.768 06:57:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:20.768 06:57:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:23.299 06:57:27 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:23.299 00:31:23.299 real 0m17.879s 00:31:23.299 user 0m26.364s 00:31:23.299 sys 0m2.320s 00:31:23.299 06:57:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:23.299 06:57:27 -- common/autotest_common.sh@10 -- # set +x 00:31:23.299 ************************************ 00:31:23.299 END TEST nvmf_identify_passthru 00:31:23.299 ************************************ 00:31:23.299 06:57:27 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:23.299 06:57:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:23.299 06:57:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:23.299 06:57:27 -- common/autotest_common.sh@10 -- # set +x 00:31:23.299 ************************************ 00:31:23.299 START TEST nvmf_dif 00:31:23.299 ************************************ 00:31:23.299 06:57:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:23.299 * Looking for test storage... 
00:31:23.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:23.299 06:57:27 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:23.299 06:57:27 -- nvmf/common.sh@7 -- # uname -s 00:31:23.299 06:57:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:23.299 06:57:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:23.299 06:57:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:23.299 06:57:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:23.299 06:57:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:23.299 06:57:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:23.299 06:57:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:23.299 06:57:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:23.299 06:57:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:23.299 06:57:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:23.299 06:57:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:23.299 06:57:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:23.299 06:57:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:23.299 06:57:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:23.299 06:57:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:23.299 06:57:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:23.299 06:57:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:23.299 06:57:27 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:23.299 06:57:27 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:23.299 06:57:27 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:23.299 06:57:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.299 06:57:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.299 06:57:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.299 06:57:27 -- paths/export.sh@5 -- # export PATH 00:31:23.299 06:57:27 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.299 06:57:27 -- nvmf/common.sh@47 -- # : 0 00:31:23.299 06:57:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:23.299 06:57:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:23.299 06:57:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:23.299 06:57:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:23.299 06:57:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:23.299 06:57:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:23.299 06:57:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:23.299 06:57:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:23.299 06:57:27 -- target/dif.sh@15 -- # NULL_META=16 00:31:23.299 06:57:27 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:31:23.299 06:57:27 -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:23.299 06:57:27 -- target/dif.sh@15 -- # NULL_DIF=1 00:31:23.299 06:57:27 -- target/dif.sh@135 -- # nvmftestinit 00:31:23.299 06:57:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:31:23.299 06:57:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:23.299 06:57:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:31:23.299 06:57:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:31:23.299 06:57:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:31:23.299 06:57:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:23.299 06:57:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:23.299 06:57:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:23.299 06:57:27 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:31:23.299 06:57:27 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:31:23.299 06:57:27 -- nvmf/common.sh@285 -- # xtrace_disable 00:31:23.299 06:57:27 -- common/autotest_common.sh@10 -- # set +x 00:31:25.214 06:57:29 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:25.214 06:57:29 -- nvmf/common.sh@291 -- # pci_devs=() 00:31:25.214 06:57:29 -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:25.214 06:57:29 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:25.214 06:57:29 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:25.214 06:57:29 -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:25.214 06:57:29 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:25.214 06:57:29 -- nvmf/common.sh@295 -- # net_devs=() 00:31:25.214 06:57:29 -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:25.214 06:57:29 -- nvmf/common.sh@296 -- # e810=() 00:31:25.214 06:57:29 -- nvmf/common.sh@296 -- # local -ga e810 00:31:25.214 06:57:29 -- nvmf/common.sh@297 -- # x722=() 00:31:25.214 06:57:29 -- nvmf/common.sh@297 -- # local -ga x722 00:31:25.214 06:57:29 -- nvmf/common.sh@298 -- # mlx=() 00:31:25.214 06:57:29 -- nvmf/common.sh@298 -- # local -ga mlx 00:31:25.214 06:57:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:25.214 06:57:29 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:25.214 06:57:29 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:25.214 06:57:29 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:31:25.214 06:57:29 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:25.214 06:57:29 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:25.214 06:57:29 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:25.214 06:57:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:25.214 06:57:29 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:25.214 06:57:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:25.214 06:57:29 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:25.214 06:57:29 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:25.214 06:57:29 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:25.214 06:57:29 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:25.214 06:57:29 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:25.214 06:57:29 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:25.214 06:57:29 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:25.214 06:57:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:25.214 06:57:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:25.214 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:25.214 06:57:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:25.214 06:57:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:25.214 06:57:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:25.214 06:57:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:25.214 06:57:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:25.214 06:57:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:25.214 06:57:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:25.214 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:25.214 06:57:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:25.214 06:57:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:25.214 06:57:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:25.214 06:57:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:25.214 06:57:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:25.214 06:57:29 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:25.214 06:57:29 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:25.214 06:57:29 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:25.214 06:57:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:25.214 06:57:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.214 06:57:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:31:25.214 06:57:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.214 06:57:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:25.214 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:25.214 06:57:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:31:25.214 06:57:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:25.214 06:57:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.214 06:57:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:31:25.214 06:57:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.214 06:57:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:25.214 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:25.214 06:57:29 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:31:25.214 06:57:29 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:31:25.214 06:57:29 -- nvmf/common.sh@403 -- # is_hw=yes 00:31:25.214 06:57:29 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:31:25.214 06:57:29 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:31:25.214 06:57:29 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:31:25.214 06:57:29 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:25.214 06:57:29 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:25.214 06:57:29 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:25.214 06:57:29 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:25.214 06:57:29 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:25.214 06:57:29 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:25.214 06:57:29 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:25.214 06:57:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:25.214 06:57:29 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:25.214 06:57:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:25.214 06:57:29 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:25.214 06:57:29 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:25.214 06:57:29 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:25.214 06:57:29 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:25.214 06:57:29 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:25.214 06:57:29 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:25.214 06:57:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:25.214 06:57:29 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:25.214 06:57:29 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:25.214 06:57:29 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:25.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:25.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:31:25.214 00:31:25.214 --- 10.0.0.2 ping statistics --- 00:31:25.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:25.214 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:31:25.214 06:57:29 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:25.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:25.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:31:25.214 00:31:25.214 --- 10.0.0.1 ping statistics --- 00:31:25.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:25.214 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:31:25.214 06:57:29 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:25.214 06:57:29 -- nvmf/common.sh@411 -- # return 0 00:31:25.214 06:57:29 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:31:25.214 06:57:29 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:26.588 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:26.588 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:31:26.588 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:26.588 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:26.588 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:31:26.588 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:26.588 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:26.588 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:26.588 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:26.588 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:26.588 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:26.588 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:26.588 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:31:26.588 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:26.588 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:26.588 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:26.588 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:26.588 06:57:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:26.588 06:57:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:31:26.588 06:57:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:31:26.588 06:57:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:26.588 06:57:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:31:26.588 06:57:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:31:26.588 06:57:30 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:26.588 06:57:30 -- target/dif.sh@137 -- # nvmfappstart 00:31:26.588 06:57:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:31:26.588 06:57:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:26.589 06:57:30 -- common/autotest_common.sh@10 -- # set +x 00:31:26.589 06:57:30 -- nvmf/common.sh@470 -- # nvmfpid=133308 00:31:26.589 06:57:30 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:26.589 06:57:30 -- nvmf/common.sh@471 -- # waitforlisten 133308 00:31:26.589 06:57:30 -- common/autotest_common.sh@817 -- # '[' -z 133308 ']' 00:31:26.589 06:57:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:26.589 06:57:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:26.589 06:57:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:26.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
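To summarize the bring-up traced above: one port of the e810 pair is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), its peer stays in the root namespace as the initiator (10.0.0.1), and nvmf_tgt is then started inside that namespace. A condensed sketch, using rpc.py in place of the rpc_cmd/waitforlisten helpers and assuming the default /var/tmp/spdk.sock RPC socket and an SPDK build tree at $SPDK:

  SPDK=/path/to/spdk              # hypothetical build-tree location
  NS=cvl_0_0_ns_spdk

  # Target port lives in its own namespace; initiator port stays in the root ns.
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Start the NVMe-oF target inside the namespace and wait for its RPC socket.
  ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
  "$SPDK/scripts/rpc.py" -t 60 rpc_get_methods > /dev/null

  # dif.sh enables DIF insert/strip on the TCP transport (issued just below).
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o --dif-insert-or-strip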
00:31:26.589 06:57:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:26.589 06:57:30 -- common/autotest_common.sh@10 -- # set +x 00:31:26.589 [2024-04-17 06:57:31.012701] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:31:26.589 [2024-04-17 06:57:31.012789] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:26.589 EAL: No free 2048 kB hugepages reported on node 1 00:31:26.589 [2024-04-17 06:57:31.078241] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:26.589 [2024-04-17 06:57:31.168047] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:26.589 [2024-04-17 06:57:31.168110] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:26.589 [2024-04-17 06:57:31.168136] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:26.589 [2024-04-17 06:57:31.168151] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:26.589 [2024-04-17 06:57:31.168163] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:26.589 [2024-04-17 06:57:31.168227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:26.848 06:57:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:26.848 06:57:31 -- common/autotest_common.sh@850 -- # return 0 00:31:26.848 06:57:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:31:26.848 06:57:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:26.848 06:57:31 -- common/autotest_common.sh@10 -- # set +x 00:31:26.848 06:57:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:26.848 06:57:31 -- target/dif.sh@139 -- # create_transport 00:31:26.848 06:57:31 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:26.848 06:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:26.848 06:57:31 -- common/autotest_common.sh@10 -- # set +x 00:31:26.848 [2024-04-17 06:57:31.322958] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:26.848 06:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:26.848 06:57:31 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:26.848 06:57:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:26.848 06:57:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:26.848 06:57:31 -- common/autotest_common.sh@10 -- # set +x 00:31:26.848 ************************************ 00:31:26.848 START TEST fio_dif_1_default 00:31:26.848 ************************************ 00:31:26.848 06:57:31 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:31:26.848 06:57:31 -- target/dif.sh@86 -- # create_subsystems 0 00:31:26.848 06:57:31 -- target/dif.sh@28 -- # local sub 00:31:26.848 06:57:31 -- target/dif.sh@30 -- # for sub in "$@" 00:31:26.848 06:57:31 -- target/dif.sh@31 -- # create_subsystem 0 00:31:26.848 06:57:31 -- target/dif.sh@18 -- # local sub_id=0 00:31:26.848 06:57:31 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:26.848 06:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:26.848 06:57:31 -- common/autotest_common.sh@10 -- # set +x 00:31:26.848 
bdev_null0 00:31:26.848 06:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:26.848 06:57:31 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:26.848 06:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:26.848 06:57:31 -- common/autotest_common.sh@10 -- # set +x 00:31:26.848 06:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:26.848 06:57:31 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:26.848 06:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:26.848 06:57:31 -- common/autotest_common.sh@10 -- # set +x 00:31:26.848 06:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:26.848 06:57:31 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:26.848 06:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:26.848 06:57:31 -- common/autotest_common.sh@10 -- # set +x 00:31:26.848 [2024-04-17 06:57:31.447452] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:26.848 06:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:26.848 06:57:31 -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:26.848 06:57:31 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:26.848 06:57:31 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:26.848 06:57:31 -- nvmf/common.sh@521 -- # config=() 00:31:26.848 06:57:31 -- nvmf/common.sh@521 -- # local subsystem config 00:31:26.848 06:57:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:26.848 06:57:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:26.848 { 00:31:26.848 "params": { 00:31:26.848 "name": "Nvme$subsystem", 00:31:26.848 "trtype": "$TEST_TRANSPORT", 00:31:26.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:26.848 "adrfam": "ipv4", 00:31:26.848 "trsvcid": "$NVMF_PORT", 00:31:26.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:26.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:26.848 "hdgst": ${hdgst:-false}, 00:31:26.848 "ddgst": ${ddgst:-false} 00:31:26.848 }, 00:31:26.848 "method": "bdev_nvme_attach_controller" 00:31:26.848 } 00:31:26.848 EOF 00:31:26.848 )") 00:31:26.848 06:57:31 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:26.848 06:57:31 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:26.848 06:57:31 -- target/dif.sh@82 -- # gen_fio_conf 00:31:26.848 06:57:31 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:31:26.848 06:57:31 -- target/dif.sh@54 -- # local file 00:31:26.848 06:57:31 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:26.848 06:57:31 -- target/dif.sh@56 -- # cat 00:31:26.848 06:57:31 -- common/autotest_common.sh@1325 -- # local sanitizers 00:31:26.848 06:57:31 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:26.848 06:57:31 -- common/autotest_common.sh@1327 -- # shift 00:31:26.848 06:57:31 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:31:26.848 06:57:31 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:26.848 06:57:31 -- nvmf/common.sh@543 -- # cat 00:31:27.107 06:57:31 -- common/autotest_common.sh@1331 -- # 
ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:27.107 06:57:31 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:27.107 06:57:31 -- common/autotest_common.sh@1331 -- # grep libasan 00:31:27.107 06:57:31 -- target/dif.sh@72 -- # (( file <= files )) 00:31:27.107 06:57:31 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:27.107 06:57:31 -- nvmf/common.sh@545 -- # jq . 00:31:27.107 06:57:31 -- nvmf/common.sh@546 -- # IFS=, 00:31:27.107 06:57:31 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:31:27.107 "params": { 00:31:27.107 "name": "Nvme0", 00:31:27.107 "trtype": "tcp", 00:31:27.107 "traddr": "10.0.0.2", 00:31:27.107 "adrfam": "ipv4", 00:31:27.107 "trsvcid": "4420", 00:31:27.107 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:27.107 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:27.107 "hdgst": false, 00:31:27.107 "ddgst": false 00:31:27.107 }, 00:31:27.107 "method": "bdev_nvme_attach_controller" 00:31:27.107 }' 00:31:27.107 06:57:31 -- common/autotest_common.sh@1331 -- # asan_lib= 00:31:27.107 06:57:31 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:31:27.107 06:57:31 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:27.107 06:57:31 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:27.107 06:57:31 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:31:27.107 06:57:31 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:27.107 06:57:31 -- common/autotest_common.sh@1331 -- # asan_lib= 00:31:27.107 06:57:31 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:31:27.107 06:57:31 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:27.107 06:57:31 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:27.107 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:27.107 fio-3.35 00:31:27.107 Starting 1 thread 00:31:27.365 EAL: No free 2048 kB hugepages reported on node 1 00:31:39.556 00:31:39.556 filename0: (groupid=0, jobs=1): err= 0: pid=133543: Wed Apr 17 06:57:42 2024 00:31:39.556 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10011msec) 00:31:39.556 slat (nsec): min=4129, max=28357, avg=9154.17, stdev=2678.20 00:31:39.556 clat (usec): min=752, max=45551, avg=21091.21, stdev=20150.94 00:31:39.556 lat (usec): min=760, max=45566, avg=21100.37, stdev=20150.85 00:31:39.556 clat percentiles (usec): 00:31:39.556 | 1.00th=[ 783], 5.00th=[ 816], 10.00th=[ 832], 20.00th=[ 857], 00:31:39.556 | 30.00th=[ 865], 40.00th=[ 889], 50.00th=[41157], 60.00th=[41157], 00:31:39.556 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:39.556 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:31:39.556 | 99.99th=[45351] 00:31:39.556 bw ( KiB/s): min= 670, max= 768, per=99.79%, avg=756.70, stdev=26.35, samples=20 00:31:39.557 iops : min= 167, max= 192, avg=189.15, stdev= 6.67, samples=20 00:31:39.557 lat (usec) : 1000=49.79% 00:31:39.557 lat (msec) : 50=50.21% 00:31:39.557 cpu : usr=89.77%, sys=9.91%, ctx=37, majf=0, minf=223 00:31:39.557 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:39.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.557 issued rwts: 
total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.557 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:39.557 00:31:39.557 Run status group 0 (all jobs): 00:31:39.557 READ: bw=758KiB/s (776kB/s), 758KiB/s-758KiB/s (776kB/s-776kB/s), io=7584KiB (7766kB), run=10011-10011msec 00:31:39.557 06:57:42 -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:39.557 06:57:42 -- target/dif.sh@43 -- # local sub 00:31:39.557 06:57:42 -- target/dif.sh@45 -- # for sub in "$@" 00:31:39.557 06:57:42 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:39.557 06:57:42 -- target/dif.sh@36 -- # local sub_id=0 00:31:39.557 06:57:42 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:39.557 06:57:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:39.557 06:57:42 -- common/autotest_common.sh@10 -- # set +x 00:31:39.557 06:57:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:39.557 06:57:42 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:39.557 06:57:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:39.557 06:57:42 -- common/autotest_common.sh@10 -- # set +x 00:31:39.557 06:57:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:39.557 00:31:39.557 real 0m11.004s 00:31:39.557 user 0m10.071s 00:31:39.557 sys 0m1.271s 00:31:39.557 06:57:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:39.557 06:57:42 -- common/autotest_common.sh@10 -- # set +x 00:31:39.557 ************************************ 00:31:39.557 END TEST fio_dif_1_default 00:31:39.557 ************************************ 00:31:39.557 06:57:42 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:39.557 06:57:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:39.557 06:57:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:39.557 06:57:42 -- common/autotest_common.sh@10 -- # set +x 00:31:39.557 ************************************ 00:31:39.557 START TEST fio_dif_1_multi_subsystems 00:31:39.557 ************************************ 00:31:39.557 06:57:42 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:31:39.557 06:57:42 -- target/dif.sh@92 -- # local files=1 00:31:39.557 06:57:42 -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:39.557 06:57:42 -- target/dif.sh@28 -- # local sub 00:31:39.557 06:57:42 -- target/dif.sh@30 -- # for sub in "$@" 00:31:39.557 06:57:42 -- target/dif.sh@31 -- # create_subsystem 0 00:31:39.557 06:57:42 -- target/dif.sh@18 -- # local sub_id=0 00:31:39.557 06:57:42 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:39.557 06:57:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:39.557 06:57:42 -- common/autotest_common.sh@10 -- # set +x 00:31:39.557 bdev_null0 00:31:39.557 06:57:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:39.557 06:57:42 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:39.557 06:57:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:39.557 06:57:42 -- common/autotest_common.sh@10 -- # set +x 00:31:39.557 06:57:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:39.557 06:57:42 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:39.557 06:57:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:39.557 06:57:42 -- common/autotest_common.sh@10 -- # set +x 00:31:39.557 06:57:42 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:39.557 06:57:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:39.557 06:57:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:39.557 06:57:42 -- common/autotest_common.sh@10 -- # set +x 00:31:39.557 [2024-04-17 06:57:42.578314] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:39.557 06:57:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:39.557 06:57:42 -- target/dif.sh@30 -- # for sub in "$@" 00:31:39.557 06:57:42 -- target/dif.sh@31 -- # create_subsystem 1 00:31:39.557 06:57:42 -- target/dif.sh@18 -- # local sub_id=1 00:31:39.557 06:57:42 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:39.557 06:57:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:39.557 06:57:42 -- common/autotest_common.sh@10 -- # set +x 00:31:39.557 bdev_null1 00:31:39.557 06:57:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:39.557 06:57:42 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:39.557 06:57:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:39.557 06:57:42 -- common/autotest_common.sh@10 -- # set +x 00:31:39.557 06:57:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:39.557 06:57:42 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:39.557 06:57:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:39.557 06:57:42 -- common/autotest_common.sh@10 -- # set +x 00:31:39.557 06:57:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:39.557 06:57:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:39.557 06:57:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:39.557 06:57:42 -- common/autotest_common.sh@10 -- # set +x 00:31:39.557 06:57:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:39.557 06:57:42 -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:39.557 06:57:42 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:39.557 06:57:42 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:39.557 06:57:42 -- nvmf/common.sh@521 -- # config=() 00:31:39.557 06:57:42 -- nvmf/common.sh@521 -- # local subsystem config 00:31:39.557 06:57:42 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:39.557 06:57:42 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:39.557 { 00:31:39.557 "params": { 00:31:39.557 "name": "Nvme$subsystem", 00:31:39.557 "trtype": "$TEST_TRANSPORT", 00:31:39.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:39.557 "adrfam": "ipv4", 00:31:39.557 "trsvcid": "$NVMF_PORT", 00:31:39.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:39.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:39.557 "hdgst": ${hdgst:-false}, 00:31:39.557 "ddgst": ${ddgst:-false} 00:31:39.557 }, 00:31:39.557 "method": "bdev_nvme_attach_controller" 00:31:39.557 } 00:31:39.557 EOF 00:31:39.557 )") 00:31:39.557 06:57:42 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:39.557 06:57:42 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:39.557 06:57:42 -- common/autotest_common.sh@1323 -- # local 
fio_dir=/usr/src/fio 00:31:39.557 06:57:42 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:39.557 06:57:42 -- common/autotest_common.sh@1325 -- # local sanitizers 00:31:39.557 06:57:42 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:39.557 06:57:42 -- target/dif.sh@82 -- # gen_fio_conf 00:31:39.557 06:57:42 -- common/autotest_common.sh@1327 -- # shift 00:31:39.557 06:57:42 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:31:39.557 06:57:42 -- target/dif.sh@54 -- # local file 00:31:39.557 06:57:42 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:39.557 06:57:42 -- target/dif.sh@56 -- # cat 00:31:39.557 06:57:42 -- nvmf/common.sh@543 -- # cat 00:31:39.557 06:57:42 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:39.557 06:57:42 -- common/autotest_common.sh@1331 -- # grep libasan 00:31:39.557 06:57:42 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:39.557 06:57:42 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:39.557 06:57:42 -- target/dif.sh@72 -- # (( file <= files )) 00:31:39.557 06:57:42 -- target/dif.sh@73 -- # cat 00:31:39.557 06:57:42 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:39.557 06:57:42 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:39.557 { 00:31:39.557 "params": { 00:31:39.557 "name": "Nvme$subsystem", 00:31:39.557 "trtype": "$TEST_TRANSPORT", 00:31:39.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:39.557 "adrfam": "ipv4", 00:31:39.557 "trsvcid": "$NVMF_PORT", 00:31:39.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:39.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:39.557 "hdgst": ${hdgst:-false}, 00:31:39.557 "ddgst": ${ddgst:-false} 00:31:39.557 }, 00:31:39.557 "method": "bdev_nvme_attach_controller" 00:31:39.557 } 00:31:39.557 EOF 00:31:39.557 )") 00:31:39.557 06:57:42 -- nvmf/common.sh@543 -- # cat 00:31:39.557 06:57:42 -- target/dif.sh@72 -- # (( file++ )) 00:31:39.557 06:57:42 -- target/dif.sh@72 -- # (( file <= files )) 00:31:39.557 06:57:42 -- nvmf/common.sh@545 -- # jq . 
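The subsystem plumbing exercised in this two-target pass is the RPC sequence traced a little earlier; as a standalone sketch with rpc.py standing in for the rpc_cmd wrapper (default RPC socket assumed), each null bdev carries 16 bytes of metadata with DIF type 1 and is exported on 10.0.0.2:4420:

  RPC="$SPDK/scripts/rpc.py"          # $SPDK: build-tree location, as above
  for i in 0 1; do
      # 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
      $RPC bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
      $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
           --serial-number 53313233-$i --allow-any-host
      $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
      $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
           -t tcp -a 10.0.0.2 -s 4420
  done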
00:31:39.557 06:57:42 -- nvmf/common.sh@546 -- # IFS=, 00:31:39.557 06:57:42 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:31:39.557 "params": { 00:31:39.557 "name": "Nvme0", 00:31:39.557 "trtype": "tcp", 00:31:39.557 "traddr": "10.0.0.2", 00:31:39.557 "adrfam": "ipv4", 00:31:39.557 "trsvcid": "4420", 00:31:39.557 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:39.557 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:39.557 "hdgst": false, 00:31:39.557 "ddgst": false 00:31:39.557 }, 00:31:39.557 "method": "bdev_nvme_attach_controller" 00:31:39.557 },{ 00:31:39.557 "params": { 00:31:39.557 "name": "Nvme1", 00:31:39.557 "trtype": "tcp", 00:31:39.557 "traddr": "10.0.0.2", 00:31:39.557 "adrfam": "ipv4", 00:31:39.557 "trsvcid": "4420", 00:31:39.557 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:39.557 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:39.557 "hdgst": false, 00:31:39.557 "ddgst": false 00:31:39.557 }, 00:31:39.557 "method": "bdev_nvme_attach_controller" 00:31:39.557 }' 00:31:39.557 06:57:42 -- common/autotest_common.sh@1331 -- # asan_lib= 00:31:39.557 06:57:42 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:31:39.557 06:57:42 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:39.557 06:57:42 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:39.557 06:57:42 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:31:39.557 06:57:42 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:39.557 06:57:42 -- common/autotest_common.sh@1331 -- # asan_lib= 00:31:39.557 06:57:42 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:31:39.557 06:57:42 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:39.557 06:57:42 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:39.557 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:39.557 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:39.557 fio-3.35 00:31:39.557 Starting 2 threads 00:31:39.557 EAL: No free 2048 kB hugepages reported on node 1 00:31:49.537 00:31:49.537 filename0: (groupid=0, jobs=1): err= 0: pid=134958: Wed Apr 17 06:57:53 2024 00:31:49.537 read: IOPS=97, BW=389KiB/s (398kB/s)(3904KiB/10037msec) 00:31:49.537 slat (nsec): min=6823, max=31973, avg=9238.09, stdev=3642.58 00:31:49.537 clat (usec): min=40856, max=43631, avg=41105.46, stdev=358.41 00:31:49.537 lat (usec): min=40864, max=43663, avg=41114.70, stdev=359.12 00:31:49.537 clat percentiles (usec): 00:31:49.537 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:49.537 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:49.537 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:31:49.537 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:31:49.537 | 99.99th=[43779] 00:31:49.537 bw ( KiB/s): min= 384, max= 416, per=33.99%, avg=388.80, stdev=11.72, samples=20 00:31:49.537 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:31:49.537 lat (msec) : 50=100.00% 00:31:49.537 cpu : usr=94.96%, sys=4.74%, ctx=15, majf=0, minf=119 00:31:49.537 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:49.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:31:49.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.537 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.537 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:49.537 filename1: (groupid=0, jobs=1): err= 0: pid=134959: Wed Apr 17 06:57:53 2024 00:31:49.537 read: IOPS=188, BW=755KiB/s (773kB/s)(7552KiB/10003msec) 00:31:49.537 slat (nsec): min=6774, max=55553, avg=9040.27, stdev=3677.74 00:31:49.537 clat (usec): min=855, max=44613, avg=21164.49, stdev=20159.00 00:31:49.537 lat (usec): min=862, max=44653, avg=21173.53, stdev=20158.50 00:31:49.537 clat percentiles (usec): 00:31:49.537 | 1.00th=[ 865], 5.00th=[ 889], 10.00th=[ 906], 20.00th=[ 922], 00:31:49.537 | 30.00th=[ 930], 40.00th=[ 947], 50.00th=[41157], 60.00th=[41157], 00:31:49.537 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:31:49.537 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:31:49.537 | 99.99th=[44827] 00:31:49.537 bw ( KiB/s): min= 672, max= 768, per=65.97%, avg=753.60, stdev=30.22, samples=20 00:31:49.537 iops : min= 168, max= 192, avg=188.40, stdev= 7.56, samples=20 00:31:49.537 lat (usec) : 1000=49.10% 00:31:49.537 lat (msec) : 2=0.69%, 50=50.21% 00:31:49.537 cpu : usr=93.84%, sys=5.86%, ctx=21, majf=0, minf=191 00:31:49.537 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:49.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.538 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.538 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.538 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:49.538 00:31:49.538 Run status group 0 (all jobs): 00:31:49.538 READ: bw=1141KiB/s (1169kB/s), 389KiB/s-755KiB/s (398kB/s-773kB/s), io=11.2MiB (11.7MB), run=10003-10037msec 00:31:49.538 06:57:54 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:49.538 06:57:54 -- target/dif.sh@43 -- # local sub 00:31:49.538 06:57:54 -- target/dif.sh@45 -- # for sub in "$@" 00:31:49.538 06:57:54 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:49.538 06:57:54 -- target/dif.sh@36 -- # local sub_id=0 00:31:49.538 06:57:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:49.538 06:57:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:49.538 06:57:54 -- common/autotest_common.sh@10 -- # set +x 00:31:49.538 06:57:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:49.538 06:57:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:49.538 06:57:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:49.538 06:57:54 -- common/autotest_common.sh@10 -- # set +x 00:31:49.538 06:57:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:49.538 06:57:54 -- target/dif.sh@45 -- # for sub in "$@" 00:31:49.538 06:57:54 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:49.538 06:57:54 -- target/dif.sh@36 -- # local sub_id=1 00:31:49.538 06:57:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:49.538 06:57:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:49.538 06:57:54 -- common/autotest_common.sh@10 -- # set +x 00:31:49.538 06:57:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:49.538 06:57:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:49.538 06:57:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:49.538 06:57:54 -- 
common/autotest_common.sh@10 -- # set +x 00:31:49.538 06:57:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:49.538 00:31:49.538 real 0m11.567s 00:31:49.538 user 0m20.401s 00:31:49.538 sys 0m1.342s 00:31:49.538 06:57:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:49.538 06:57:54 -- common/autotest_common.sh@10 -- # set +x 00:31:49.538 ************************************ 00:31:49.538 END TEST fio_dif_1_multi_subsystems 00:31:49.538 ************************************ 00:31:49.538 06:57:54 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:49.538 06:57:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:49.538 06:57:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:49.538 06:57:54 -- common/autotest_common.sh@10 -- # set +x 00:31:49.808 ************************************ 00:31:49.808 START TEST fio_dif_rand_params 00:31:49.808 ************************************ 00:31:49.809 06:57:54 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:31:49.809 06:57:54 -- target/dif.sh@100 -- # local NULL_DIF 00:31:49.809 06:57:54 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:49.809 06:57:54 -- target/dif.sh@103 -- # NULL_DIF=3 00:31:49.809 06:57:54 -- target/dif.sh@103 -- # bs=128k 00:31:49.809 06:57:54 -- target/dif.sh@103 -- # numjobs=3 00:31:49.809 06:57:54 -- target/dif.sh@103 -- # iodepth=3 00:31:49.809 06:57:54 -- target/dif.sh@103 -- # runtime=5 00:31:49.809 06:57:54 -- target/dif.sh@105 -- # create_subsystems 0 00:31:49.809 06:57:54 -- target/dif.sh@28 -- # local sub 00:31:49.809 06:57:54 -- target/dif.sh@30 -- # for sub in "$@" 00:31:49.809 06:57:54 -- target/dif.sh@31 -- # create_subsystem 0 00:31:49.809 06:57:54 -- target/dif.sh@18 -- # local sub_id=0 00:31:49.809 06:57:54 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:49.809 06:57:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:49.809 06:57:54 -- common/autotest_common.sh@10 -- # set +x 00:31:49.809 bdev_null0 00:31:49.809 06:57:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:49.809 06:57:54 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:49.809 06:57:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:49.809 06:57:54 -- common/autotest_common.sh@10 -- # set +x 00:31:49.809 06:57:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:49.809 06:57:54 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:49.809 06:57:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:49.809 06:57:54 -- common/autotest_common.sh@10 -- # set +x 00:31:49.809 06:57:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:49.809 06:57:54 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:49.809 06:57:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:49.809 06:57:54 -- common/autotest_common.sh@10 -- # set +x 00:31:49.809 [2024-04-17 06:57:54.250853] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:49.809 06:57:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:49.809 06:57:54 -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:49.809 06:57:54 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:49.809 06:57:54 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:49.809 06:57:54 
-- nvmf/common.sh@521 -- # config=() 00:31:49.809 06:57:54 -- nvmf/common.sh@521 -- # local subsystem config 00:31:49.809 06:57:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:49.809 06:57:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:49.809 { 00:31:49.809 "params": { 00:31:49.809 "name": "Nvme$subsystem", 00:31:49.809 "trtype": "$TEST_TRANSPORT", 00:31:49.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:49.809 "adrfam": "ipv4", 00:31:49.809 "trsvcid": "$NVMF_PORT", 00:31:49.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:49.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:49.809 "hdgst": ${hdgst:-false}, 00:31:49.809 "ddgst": ${ddgst:-false} 00:31:49.809 }, 00:31:49.809 "method": "bdev_nvme_attach_controller" 00:31:49.809 } 00:31:49.809 EOF 00:31:49.809 )") 00:31:49.809 06:57:54 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:49.809 06:57:54 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:49.809 06:57:54 -- target/dif.sh@82 -- # gen_fio_conf 00:31:49.809 06:57:54 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:31:49.809 06:57:54 -- target/dif.sh@54 -- # local file 00:31:49.809 06:57:54 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:49.809 06:57:54 -- common/autotest_common.sh@1325 -- # local sanitizers 00:31:49.809 06:57:54 -- target/dif.sh@56 -- # cat 00:31:49.809 06:57:54 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:49.809 06:57:54 -- common/autotest_common.sh@1327 -- # shift 00:31:49.809 06:57:54 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:31:49.809 06:57:54 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:49.809 06:57:54 -- nvmf/common.sh@543 -- # cat 00:31:49.809 06:57:54 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:49.809 06:57:54 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:49.809 06:57:54 -- common/autotest_common.sh@1331 -- # grep libasan 00:31:49.809 06:57:54 -- target/dif.sh@72 -- # (( file <= files )) 00:31:49.809 06:57:54 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:49.809 06:57:54 -- nvmf/common.sh@545 -- # jq . 
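Once the JSON below is assembled, the random-params pass drives the attached bdev through fio's SPDK bdev plugin. A sketch of an equivalent standalone invocation, assuming the bdev_nvme_attach_controller entry printed below is wrapped in the usual {"subsystems":[{"subsystem":"bdev","config":[...]}]} envelope in ./bdev.json and that the attached controller "Nvme0" exposes a bdev named Nvme0n1 (both are assumptions, not shown verbatim in the trace):

  # NULL_DIF=3 pass: 128k blocks, 3 jobs, queue depth 3, 5-second runtime.
  LD_PRELOAD="$SPDK/build/fio/spdk_bdev" /usr/src/fio/fio \
      --ioengine=spdk_bdev --spdk_json_conf=./bdev.json --thread \
      --name=filename0 --filename=Nvme0n1 --rw=randread \
      --bs=128k --iodepth=3 --numjobs=3 --runtime=5 --time_based

The job shape (rw=randread, bs=128KiB, iodepth=3, three jobs) matches the filename0 lines in the fio output further down.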
00:31:49.809 06:57:54 -- nvmf/common.sh@546 -- # IFS=, 00:31:49.809 06:57:54 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:31:49.809 "params": { 00:31:49.809 "name": "Nvme0", 00:31:49.809 "trtype": "tcp", 00:31:49.809 "traddr": "10.0.0.2", 00:31:49.809 "adrfam": "ipv4", 00:31:49.809 "trsvcid": "4420", 00:31:49.809 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:49.809 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:49.809 "hdgst": false, 00:31:49.809 "ddgst": false 00:31:49.809 }, 00:31:49.809 "method": "bdev_nvme_attach_controller" 00:31:49.809 }' 00:31:49.809 06:57:54 -- common/autotest_common.sh@1331 -- # asan_lib= 00:31:49.809 06:57:54 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:31:49.809 06:57:54 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:49.809 06:57:54 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:49.809 06:57:54 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:31:49.809 06:57:54 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:49.809 06:57:54 -- common/autotest_common.sh@1331 -- # asan_lib= 00:31:49.809 06:57:54 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:31:49.809 06:57:54 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:49.809 06:57:54 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:50.070 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:50.070 ... 00:31:50.070 fio-3.35 00:31:50.070 Starting 3 threads 00:31:50.070 EAL: No free 2048 kB hugepages reported on node 1 00:31:56.632 00:31:56.632 filename0: (groupid=0, jobs=1): err= 0: pid=136365: Wed Apr 17 06:58:00 2024 00:31:56.632 read: IOPS=193, BW=24.2MiB/s (25.4MB/s)(121MiB/5008msec) 00:31:56.632 slat (nsec): min=4710, max=58860, avg=18576.76, stdev=7435.38 00:31:56.633 clat (usec): min=5624, max=89077, avg=15478.42, stdev=13467.11 00:31:56.633 lat (usec): min=5637, max=89098, avg=15496.99, stdev=13467.35 00:31:56.633 clat percentiles (usec): 00:31:56.633 | 1.00th=[ 5932], 5.00th=[ 6587], 10.00th=[ 7701], 20.00th=[ 8848], 00:31:56.633 | 30.00th=[ 9503], 40.00th=[10028], 50.00th=[11338], 60.00th=[12256], 00:31:56.633 | 70.00th=[13173], 80.00th=[14222], 90.00th=[49546], 95.00th=[51643], 00:31:56.633 | 99.00th=[54264], 99.50th=[55837], 99.90th=[88605], 99.95th=[88605], 00:31:56.633 | 99.99th=[88605] 00:31:56.633 bw ( KiB/s): min=18176, max=35584, per=33.44%, avg=24729.60, stdev=4717.63, samples=10 00:31:56.633 iops : min= 142, max= 278, avg=193.20, stdev=36.86, samples=10 00:31:56.633 lat (msec) : 10=39.63%, 20=49.12%, 50=2.48%, 100=8.77% 00:31:56.633 cpu : usr=90.91%, sys=6.73%, ctx=361, majf=0, minf=78 00:31:56.633 IO depths : 1=2.4%, 2=97.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:56.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.633 issued rwts: total=969,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.633 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:56.633 filename0: (groupid=0, jobs=1): err= 0: pid=136366: Wed Apr 17 06:58:00 2024 00:31:56.633 read: IOPS=196, BW=24.6MiB/s (25.8MB/s)(124MiB/5045msec) 00:31:56.633 slat (nsec): min=4278, max=54157, avg=14114.93, stdev=5002.92 00:31:56.633 clat (usec): 
min=4976, max=92115, avg=15193.41, stdev=12680.49 00:31:56.633 lat (usec): min=4989, max=92132, avg=15207.53, stdev=12680.44 00:31:56.633 clat percentiles (usec): 00:31:56.633 | 1.00th=[ 5800], 5.00th=[ 6259], 10.00th=[ 6980], 20.00th=[ 8848], 00:31:56.633 | 30.00th=[ 9634], 40.00th=[10290], 50.00th=[11600], 60.00th=[12780], 00:31:56.633 | 70.00th=[13698], 80.00th=[15008], 90.00th=[20579], 95.00th=[51643], 00:31:56.633 | 99.00th=[55837], 99.50th=[56886], 99.90th=[91751], 99.95th=[91751], 00:31:56.633 | 99.99th=[91751] 00:31:56.633 bw ( KiB/s): min=17920, max=33024, per=34.27%, avg=25344.00, stdev=4470.03, samples=10 00:31:56.633 iops : min= 140, max= 258, avg=198.00, stdev=34.92, samples=10 00:31:56.633 lat (msec) : 10=35.08%, 20=54.84%, 50=2.32%, 100=7.76% 00:31:56.633 cpu : usr=93.52%, sys=6.03%, ctx=13, majf=0, minf=99 00:31:56.633 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:56.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.633 issued rwts: total=992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.633 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:56.633 filename0: (groupid=0, jobs=1): err= 0: pid=136367: Wed Apr 17 06:58:00 2024 00:31:56.633 read: IOPS=189, BW=23.6MiB/s (24.8MB/s)(119MiB/5045msec) 00:31:56.633 slat (nsec): min=4583, max=50553, avg=14006.36, stdev=5069.11 00:31:56.633 clat (usec): min=5803, max=90965, avg=15802.39, stdev=13683.21 00:31:56.633 lat (usec): min=5814, max=90977, avg=15816.39, stdev=13683.08 00:31:56.633 clat percentiles (usec): 00:31:56.633 | 1.00th=[ 6128], 5.00th=[ 6456], 10.00th=[ 6980], 20.00th=[ 8848], 00:31:56.633 | 30.00th=[ 9765], 40.00th=[10552], 50.00th=[11731], 60.00th=[12911], 00:31:56.633 | 70.00th=[13960], 80.00th=[15008], 90.00th=[49021], 95.00th=[52691], 00:31:56.633 | 99.00th=[55313], 99.50th=[55837], 99.90th=[90702], 99.95th=[90702], 00:31:56.633 | 99.99th=[90702] 00:31:56.633 bw ( KiB/s): min=14848, max=29696, per=32.92%, avg=24350.60, stdev=4739.23, samples=10 00:31:56.633 iops : min= 116, max= 232, avg=190.20, stdev=37.02, samples=10 00:31:56.633 lat (msec) : 10=34.17%, 20=54.93%, 50=2.20%, 100=8.70% 00:31:56.633 cpu : usr=93.44%, sys=6.13%, ctx=9, majf=0, minf=130 00:31:56.633 IO depths : 1=2.0%, 2=98.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:56.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.633 issued rwts: total=954,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.633 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:56.633 00:31:56.633 Run status group 0 (all jobs): 00:31:56.633 READ: bw=72.2MiB/s (75.7MB/s), 23.6MiB/s-24.6MiB/s (24.8MB/s-25.8MB/s), io=364MiB (382MB), run=5008-5045msec 00:31:56.633 06:58:00 -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:56.633 06:58:00 -- target/dif.sh@43 -- # local sub 00:31:56.633 06:58:00 -- target/dif.sh@45 -- # for sub in "$@" 00:31:56.633 06:58:00 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:56.633 06:58:00 -- target/dif.sh@36 -- # local sub_id=0 00:31:56.633 06:58:00 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:56.633 06:58:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.633 06:58:00 -- common/autotest_common.sh@10 -- # set +x 00:31:56.633 06:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
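The two-step teardown around this point (nvmf_delete_subsystem just above, bdev_null_delete just below) is the same cleanup used after every fio_dif pass: remove the subsystem first so no host can still reach the namespace, then drop the backing null bdev. As a plain rpc.py sketch (default RPC socket assumed):

  "$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  "$SPDK/scripts/rpc.py" bdev_null_delete bdev_null0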
00:31:56.633 06:58:00 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:56.633 06:58:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.633 06:58:00 -- common/autotest_common.sh@10 -- # set +x 00:31:56.633 06:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.633 06:58:00 -- target/dif.sh@109 -- # NULL_DIF=2 00:31:56.633 06:58:00 -- target/dif.sh@109 -- # bs=4k 00:31:56.633 06:58:00 -- target/dif.sh@109 -- # numjobs=8 00:31:56.633 06:58:00 -- target/dif.sh@109 -- # iodepth=16 00:31:56.633 06:58:00 -- target/dif.sh@109 -- # runtime= 00:31:56.633 06:58:00 -- target/dif.sh@109 -- # files=2 00:31:56.633 06:58:00 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:56.633 06:58:00 -- target/dif.sh@28 -- # local sub 00:31:56.633 06:58:00 -- target/dif.sh@30 -- # for sub in "$@" 00:31:56.633 06:58:00 -- target/dif.sh@31 -- # create_subsystem 0 00:31:56.633 06:58:00 -- target/dif.sh@18 -- # local sub_id=0 00:31:56.633 06:58:00 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:56.633 06:58:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.633 06:58:00 -- common/autotest_common.sh@10 -- # set +x 00:31:56.633 bdev_null0 00:31:56.633 06:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.633 06:58:00 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:56.633 06:58:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.633 06:58:00 -- common/autotest_common.sh@10 -- # set +x 00:31:56.633 06:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.633 06:58:00 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:56.633 06:58:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.633 06:58:00 -- common/autotest_common.sh@10 -- # set +x 00:31:56.633 06:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.633 06:58:00 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:56.633 06:58:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.633 06:58:00 -- common/autotest_common.sh@10 -- # set +x 00:31:56.633 [2024-04-17 06:58:00.519669] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:56.633 06:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.633 06:58:00 -- target/dif.sh@30 -- # for sub in "$@" 00:31:56.633 06:58:00 -- target/dif.sh@31 -- # create_subsystem 1 00:31:56.633 06:58:00 -- target/dif.sh@18 -- # local sub_id=1 00:31:56.633 06:58:00 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:56.633 06:58:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.633 06:58:00 -- common/autotest_common.sh@10 -- # set +x 00:31:56.633 bdev_null1 00:31:56.633 06:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.633 06:58:00 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:56.633 06:58:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.633 06:58:00 -- common/autotest_common.sh@10 -- # set +x 00:31:56.633 06:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.633 06:58:00 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:56.633 06:58:00 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.633 06:58:00 -- common/autotest_common.sh@10 -- # set +x 00:31:56.633 06:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.633 06:58:00 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:56.633 06:58:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.633 06:58:00 -- common/autotest_common.sh@10 -- # set +x 00:31:56.633 06:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.633 06:58:00 -- target/dif.sh@30 -- # for sub in "$@" 00:31:56.633 06:58:00 -- target/dif.sh@31 -- # create_subsystem 2 00:31:56.633 06:58:00 -- target/dif.sh@18 -- # local sub_id=2 00:31:56.633 06:58:00 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:56.633 06:58:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.633 06:58:00 -- common/autotest_common.sh@10 -- # set +x 00:31:56.633 bdev_null2 00:31:56.633 06:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.633 06:58:00 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:56.633 06:58:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.633 06:58:00 -- common/autotest_common.sh@10 -- # set +x 00:31:56.633 06:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.633 06:58:00 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:56.633 06:58:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.633 06:58:00 -- common/autotest_common.sh@10 -- # set +x 00:31:56.633 06:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.633 06:58:00 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:56.633 06:58:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.633 06:58:00 -- common/autotest_common.sh@10 -- # set +x 00:31:56.633 06:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.633 06:58:00 -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:56.633 06:58:00 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:56.633 06:58:00 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:56.633 06:58:00 -- nvmf/common.sh@521 -- # config=() 00:31:56.633 06:58:00 -- nvmf/common.sh@521 -- # local subsystem config 00:31:56.633 06:58:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:56.633 06:58:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:56.633 { 00:31:56.633 "params": { 00:31:56.633 "name": "Nvme$subsystem", 00:31:56.633 "trtype": "$TEST_TRANSPORT", 00:31:56.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:56.633 "adrfam": "ipv4", 00:31:56.633 "trsvcid": "$NVMF_PORT", 00:31:56.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:56.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:56.634 "hdgst": ${hdgst:-false}, 00:31:56.634 "ddgst": ${ddgst:-false} 00:31:56.634 }, 00:31:56.634 "method": "bdev_nvme_attach_controller" 00:31:56.634 } 00:31:56.634 EOF 00:31:56.634 )") 00:31:56.634 06:58:00 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:56.634 06:58:00 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:56.634 06:58:00 -- common/autotest_common.sh@1323 -- # 
local fio_dir=/usr/src/fio 00:31:56.634 06:58:00 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:56.634 06:58:00 -- common/autotest_common.sh@1325 -- # local sanitizers 00:31:56.634 06:58:00 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:56.634 06:58:00 -- target/dif.sh@82 -- # gen_fio_conf 00:31:56.634 06:58:00 -- common/autotest_common.sh@1327 -- # shift 00:31:56.634 06:58:00 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:31:56.634 06:58:00 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:56.634 06:58:00 -- target/dif.sh@54 -- # local file 00:31:56.634 06:58:00 -- target/dif.sh@56 -- # cat 00:31:56.634 06:58:00 -- nvmf/common.sh@543 -- # cat 00:31:56.634 06:58:00 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:56.634 06:58:00 -- common/autotest_common.sh@1331 -- # grep libasan 00:31:56.634 06:58:00 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:56.634 06:58:00 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:56.634 06:58:00 -- target/dif.sh@72 -- # (( file <= files )) 00:31:56.634 06:58:00 -- target/dif.sh@73 -- # cat 00:31:56.634 06:58:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:56.634 06:58:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:56.634 { 00:31:56.634 "params": { 00:31:56.634 "name": "Nvme$subsystem", 00:31:56.634 "trtype": "$TEST_TRANSPORT", 00:31:56.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:56.634 "adrfam": "ipv4", 00:31:56.634 "trsvcid": "$NVMF_PORT", 00:31:56.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:56.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:56.634 "hdgst": ${hdgst:-false}, 00:31:56.634 "ddgst": ${ddgst:-false} 00:31:56.634 }, 00:31:56.634 "method": "bdev_nvme_attach_controller" 00:31:56.634 } 00:31:56.634 EOF 00:31:56.634 )") 00:31:56.634 06:58:00 -- nvmf/common.sh@543 -- # cat 00:31:56.634 06:58:00 -- target/dif.sh@72 -- # (( file++ )) 00:31:56.634 06:58:00 -- target/dif.sh@72 -- # (( file <= files )) 00:31:56.634 06:58:00 -- target/dif.sh@73 -- # cat 00:31:56.634 06:58:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:56.634 06:58:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:56.634 { 00:31:56.634 "params": { 00:31:56.634 "name": "Nvme$subsystem", 00:31:56.634 "trtype": "$TEST_TRANSPORT", 00:31:56.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:56.634 "adrfam": "ipv4", 00:31:56.634 "trsvcid": "$NVMF_PORT", 00:31:56.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:56.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:56.634 "hdgst": ${hdgst:-false}, 00:31:56.634 "ddgst": ${ddgst:-false} 00:31:56.634 }, 00:31:56.634 "method": "bdev_nvme_attach_controller" 00:31:56.634 } 00:31:56.634 EOF 00:31:56.634 )") 00:31:56.634 06:58:00 -- target/dif.sh@72 -- # (( file++ )) 00:31:56.634 06:58:00 -- target/dif.sh@72 -- # (( file <= files )) 00:31:56.634 06:58:00 -- nvmf/common.sh@543 -- # cat 00:31:56.634 06:58:00 -- nvmf/common.sh@545 -- # jq . 
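gen_fio_conf (traced above) emits the fio job file that later arrives on /dev/fd/61: a [global] section built from the bs/numjobs/iodepth variables set at target/dif.sh@109, plus one job section per file. The sketch below is an approximation rather than a capture from this run; in particular the filename= values assume SPDK's usual NvmeXn1 bdev naming, and thread=1 reflects the spdk_bdev engine's need to run jobs as threads:

    [global]
    thread=1
    rw=randread
    bs=4k
    numjobs=8
    iodepth=16

    [filename0]
    filename=Nvme0n1
    [filename1]
    filename=Nvme1n1
    [filename2]
    filename=Nvme2n1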
00:31:56.634 06:58:00 -- nvmf/common.sh@546 -- # IFS=, 00:31:56.634 06:58:00 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:31:56.634 "params": { 00:31:56.634 "name": "Nvme0", 00:31:56.634 "trtype": "tcp", 00:31:56.634 "traddr": "10.0.0.2", 00:31:56.634 "adrfam": "ipv4", 00:31:56.634 "trsvcid": "4420", 00:31:56.634 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:56.634 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:56.634 "hdgst": false, 00:31:56.634 "ddgst": false 00:31:56.634 }, 00:31:56.634 "method": "bdev_nvme_attach_controller" 00:31:56.634 },{ 00:31:56.634 "params": { 00:31:56.634 "name": "Nvme1", 00:31:56.634 "trtype": "tcp", 00:31:56.634 "traddr": "10.0.0.2", 00:31:56.634 "adrfam": "ipv4", 00:31:56.634 "trsvcid": "4420", 00:31:56.634 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:56.634 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:56.634 "hdgst": false, 00:31:56.634 "ddgst": false 00:31:56.634 }, 00:31:56.634 "method": "bdev_nvme_attach_controller" 00:31:56.634 },{ 00:31:56.634 "params": { 00:31:56.634 "name": "Nvme2", 00:31:56.634 "trtype": "tcp", 00:31:56.634 "traddr": "10.0.0.2", 00:31:56.634 "adrfam": "ipv4", 00:31:56.634 "trsvcid": "4420", 00:31:56.634 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:56.634 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:56.634 "hdgst": false, 00:31:56.634 "ddgst": false 00:31:56.634 }, 00:31:56.634 "method": "bdev_nvme_attach_controller" 00:31:56.634 }' 00:31:56.634 06:58:00 -- common/autotest_common.sh@1331 -- # asan_lib= 00:31:56.634 06:58:00 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:31:56.634 06:58:00 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:56.634 06:58:00 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:56.634 06:58:00 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:31:56.634 06:58:00 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:56.634 06:58:00 -- common/autotest_common.sh@1331 -- # asan_lib= 00:31:56.634 06:58:00 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:31:56.634 06:58:00 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:56.634 06:58:00 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:56.634 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:56.634 ... 00:31:56.634 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:56.634 ... 00:31:56.634 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:56.634 ... 
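The JSON document printed just above is the bdev configuration the fio plugin consumes in-process via --spdk_json_conf (fd 62), while the job file from gen_fio_conf is passed as the positional argument (fd 61). Condensing the fio_plugin wrapper traced above into a hedged sketch (the loop over libasan/libclang_rt.asan only matters for sanitizer builds; in this run both lookups came back empty, so only the plugin itself is preloaded):

    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
    for sanitizer in libasan libclang_rt.asan; do
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && LD_PRELOAD="$asan_lib $LD_PRELOAD"
    done
    LD_PRELOAD="$LD_PRELOAD $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61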
00:31:56.634 fio-3.35 00:31:56.634 Starting 24 threads 00:31:56.634 EAL: No free 2048 kB hugepages reported on node 1 00:32:08.855 00:32:08.855 filename0: (groupid=0, jobs=1): err= 0: pid=137233: Wed Apr 17 06:58:12 2024 00:32:08.855 read: IOPS=224, BW=897KiB/s (918kB/s)(8992KiB/10026msec) 00:32:08.855 slat (nsec): min=5154, max=52391, avg=18832.48, stdev=9321.21 00:32:08.855 clat (msec): min=16, max=280, avg=71.20, stdev=62.12 00:32:08.855 lat (msec): min=16, max=280, avg=71.22, stdev=62.11 00:32:08.855 clat percentiles (msec): 00:32:08.856 | 1.00th=[ 22], 5.00th=[ 29], 10.00th=[ 34], 20.00th=[ 35], 00:32:08.856 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:32:08.856 | 70.00th=[ 37], 80.00th=[ 153], 90.00th=[ 178], 95.00th=[ 194], 00:32:08.856 | 99.00th=[ 236], 99.50th=[ 243], 99.90th=[ 279], 99.95th=[ 279], 00:32:08.856 | 99.99th=[ 279] 00:32:08.856 bw ( KiB/s): min= 256, max= 1920, per=4.62%, avg=892.80, stdev=685.79, samples=20 00:32:08.856 iops : min= 64, max= 480, avg=223.20, stdev=171.45, samples=20 00:32:08.856 lat (msec) : 20=0.98%, 50=70.46%, 100=1.96%, 250=26.33%, 500=0.27% 00:32:08.856 cpu : usr=98.02%, sys=1.61%, ctx=18, majf=0, minf=35 00:32:08.856 IO depths : 1=4.7%, 2=10.0%, 4=21.9%, 8=55.6%, 16=7.8%, 32=0.0%, >=64=0.0% 00:32:08.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.856 complete : 0=0.0%, 4=93.2%, 8=1.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.856 issued rwts: total=2248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:08.856 filename0: (groupid=0, jobs=1): err= 0: pid=137234: Wed Apr 17 06:58:12 2024 00:32:08.856 read: IOPS=202, BW=811KiB/s (831kB/s)(8128KiB/10018msec) 00:32:08.856 slat (usec): min=7, max=144, avg=15.17, stdev= 9.41 00:32:08.856 clat (msec): min=20, max=342, avg=78.74, stdev=79.91 00:32:08.856 lat (msec): min=20, max=342, avg=78.76, stdev=79.91 00:32:08.856 clat percentiles (msec): 00:32:08.856 | 1.00th=[ 22], 5.00th=[ 35], 10.00th=[ 35], 20.00th=[ 35], 00:32:08.856 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:32:08.856 | 70.00th=[ 36], 80.00th=[ 157], 90.00th=[ 243], 95.00th=[ 251], 00:32:08.856 | 99.00th=[ 271], 99.50th=[ 288], 99.90th=[ 309], 99.95th=[ 342], 00:32:08.856 | 99.99th=[ 342] 00:32:08.856 bw ( KiB/s): min= 256, max= 1920, per=4.17%, avg=806.40, stdev=728.22, samples=20 00:32:08.856 iops : min= 64, max= 480, avg=201.60, stdev=182.06, samples=20 00:32:08.856 lat (msec) : 50=74.90%, 100=0.69%, 250=18.80%, 500=5.61% 00:32:08.856 cpu : usr=97.72%, sys=1.75%, ctx=19, majf=0, minf=34 00:32:08.856 IO depths : 1=4.9%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.6%, 32=0.0%, >=64=0.0% 00:32:08.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.856 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.856 issued rwts: total=2032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:08.856 filename0: (groupid=0, jobs=1): err= 0: pid=137235: Wed Apr 17 06:58:12 2024 00:32:08.856 read: IOPS=198, BW=792KiB/s (812kB/s)(7936KiB/10014msec) 00:32:08.856 slat (usec): min=8, max=114, avg=31.45, stdev=17.40 00:32:08.856 clat (msec): min=33, max=294, avg=80.41, stdev=84.66 00:32:08.856 lat (msec): min=33, max=294, avg=80.45, stdev=84.66 00:32:08.856 clat percentiles (msec): 00:32:08.856 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 35], 00:32:08.856 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 
35], 60.00th=[ 35], 00:32:08.856 | 70.00th=[ 36], 80.00th=[ 192], 90.00th=[ 245], 95.00th=[ 257], 00:32:08.856 | 99.00th=[ 275], 99.50th=[ 292], 99.90th=[ 296], 99.95th=[ 296], 00:32:08.856 | 99.99th=[ 296] 00:32:08.856 bw ( KiB/s): min= 240, max= 1920, per=4.10%, avg=792.80, stdev=739.75, samples=20 00:32:08.856 iops : min= 60, max= 480, avg=198.20, stdev=184.94, samples=20 00:32:08.856 lat (msec) : 50=76.61%, 250=15.32%, 500=8.06% 00:32:08.856 cpu : usr=97.83%, sys=1.67%, ctx=31, majf=0, minf=16 00:32:08.856 IO depths : 1=2.9%, 2=9.2%, 4=25.0%, 8=53.3%, 16=9.6%, 32=0.0%, >=64=0.0% 00:32:08.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.856 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.856 issued rwts: total=1984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:08.856 filename0: (groupid=0, jobs=1): err= 0: pid=137236: Wed Apr 17 06:58:12 2024 00:32:08.856 read: IOPS=196, BW=787KiB/s (806kB/s)(7872KiB/10004msec) 00:32:08.856 slat (nsec): min=3769, max=97661, avg=26517.48, stdev=10460.47 00:32:08.856 clat (msec): min=20, max=396, avg=81.11, stdev=88.11 00:32:08.856 lat (msec): min=20, max=396, avg=81.13, stdev=88.12 00:32:08.856 clat percentiles (msec): 00:32:08.856 | 1.00th=[ 21], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 35], 00:32:08.856 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:32:08.856 | 70.00th=[ 36], 80.00th=[ 197], 90.00th=[ 249], 95.00th=[ 257], 00:32:08.856 | 99.00th=[ 321], 99.50th=[ 321], 99.90th=[ 397], 99.95th=[ 397], 00:32:08.856 | 99.99th=[ 397] 00:32:08.856 bw ( KiB/s): min= 128, max= 1920, per=3.76%, avg=727.58, stdev=725.09, samples=19 00:32:08.856 iops : min= 32, max= 480, avg=181.89, stdev=181.27, samples=19 00:32:08.856 lat (msec) : 50=77.13%, 100=0.91%, 250=14.13%, 500=7.83% 00:32:08.856 cpu : usr=95.69%, sys=2.70%, ctx=298, majf=0, minf=14 00:32:08.856 IO depths : 1=4.5%, 2=10.4%, 4=23.6%, 8=53.6%, 16=8.0%, 32=0.0%, >=64=0.0% 00:32:08.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.856 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.856 issued rwts: total=1968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:08.856 filename0: (groupid=0, jobs=1): err= 0: pid=137237: Wed Apr 17 06:58:12 2024 00:32:08.856 read: IOPS=198, BW=792KiB/s (811kB/s)(7936KiB/10015msec) 00:32:08.856 slat (usec): min=10, max=201, avg=30.58, stdev=11.10 00:32:08.856 clat (msec): min=25, max=315, avg=80.51, stdev=85.07 00:32:08.856 lat (msec): min=26, max=315, avg=80.54, stdev=85.07 00:32:08.856 clat percentiles (msec): 00:32:08.856 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 35], 00:32:08.856 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:32:08.856 | 70.00th=[ 36], 80.00th=[ 194], 90.00th=[ 247], 95.00th=[ 255], 00:32:08.856 | 99.00th=[ 292], 99.50th=[ 292], 99.90th=[ 317], 99.95th=[ 317], 00:32:08.856 | 99.99th=[ 317] 00:32:08.856 bw ( KiB/s): min= 128, max= 1920, per=4.07%, avg=787.20, stdev=740.88, samples=20 00:32:08.856 iops : min= 32, max= 480, avg=196.80, stdev=185.22, samples=20 00:32:08.856 lat (msec) : 50=76.61%, 250=15.22%, 500=8.17% 00:32:08.856 cpu : usr=97.07%, sys=1.88%, ctx=124, majf=0, minf=22 00:32:08.856 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:32:08.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:32:08.856 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.856 issued rwts: total=1984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:08.856 filename0: (groupid=0, jobs=1): err= 0: pid=137238: Wed Apr 17 06:58:12 2024 00:32:08.856 read: IOPS=202, BW=810KiB/s (829kB/s)(8104KiB/10009msec) 00:32:08.856 slat (usec): min=8, max=844, avg=66.67, stdev=39.96 00:32:08.856 clat (msec): min=8, max=383, avg=78.69, stdev=87.82 00:32:08.856 lat (msec): min=8, max=383, avg=78.76, stdev=87.82 00:32:08.856 clat percentiles (msec): 00:32:08.856 | 1.00th=[ 16], 5.00th=[ 24], 10.00th=[ 29], 20.00th=[ 35], 00:32:08.856 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:32:08.856 | 70.00th=[ 36], 80.00th=[ 194], 90.00th=[ 249], 95.00th=[ 257], 00:32:08.856 | 99.00th=[ 313], 99.50th=[ 317], 99.90th=[ 359], 99.95th=[ 384], 00:32:08.856 | 99.99th=[ 384] 00:32:08.856 bw ( KiB/s): min= 128, max= 1968, per=3.84%, avg=741.05, stdev=747.83, samples=19 00:32:08.856 iops : min= 32, max= 492, avg=185.26, stdev=186.96, samples=19 00:32:08.856 lat (msec) : 10=0.35%, 20=1.23%, 50=75.72%, 100=1.28%, 250=13.67% 00:32:08.856 lat (msec) : 500=7.75% 00:32:08.856 cpu : usr=96.19%, sys=2.09%, ctx=71, majf=0, minf=20 00:32:08.856 IO depths : 1=1.1%, 2=3.0%, 4=8.3%, 8=72.7%, 16=14.9%, 32=0.0%, >=64=0.0% 00:32:08.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.856 complete : 0=0.0%, 4=90.6%, 8=7.0%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.856 issued rwts: total=2026,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:08.856 filename0: (groupid=0, jobs=1): err= 0: pid=137239: Wed Apr 17 06:58:12 2024 00:32:08.856 read: IOPS=198, BW=793KiB/s (812kB/s)(7936KiB/10007msec) 00:32:08.856 slat (usec): min=7, max=108, avg=37.07, stdev=18.78 00:32:08.856 clat (msec): min=20, max=383, avg=80.38, stdev=87.78 00:32:08.856 lat (msec): min=20, max=383, avg=80.42, stdev=87.79 00:32:08.856 clat percentiles (msec): 00:32:08.856 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 35], 00:32:08.856 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:32:08.856 | 70.00th=[ 35], 80.00th=[ 192], 90.00th=[ 247], 95.00th=[ 257], 00:32:08.856 | 99.00th=[ 326], 99.50th=[ 342], 99.90th=[ 384], 99.95th=[ 384], 00:32:08.856 | 99.99th=[ 384] 00:32:08.856 bw ( KiB/s): min= 128, max= 1936, per=3.80%, avg=734.32, stdev=736.37, samples=19 00:32:08.856 iops : min= 32, max= 484, avg=183.58, stdev=184.09, samples=19 00:32:08.856 lat (msec) : 50=76.92%, 100=0.50%, 250=15.02%, 500=7.56% 00:32:08.856 cpu : usr=97.96%, sys=1.48%, ctx=41, majf=0, minf=18 00:32:08.856 IO depths : 1=4.4%, 2=10.4%, 4=24.2%, 8=52.9%, 16=8.1%, 32=0.0%, >=64=0.0% 00:32:08.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.856 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.856 issued rwts: total=1984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:08.856 filename0: (groupid=0, jobs=1): err= 0: pid=137240: Wed Apr 17 06:58:12 2024 00:32:08.856 read: IOPS=198, BW=792KiB/s (811kB/s)(7936KiB/10018msec) 00:32:08.856 slat (usec): min=8, max=121, avg=67.83, stdev=19.59 00:32:08.856 clat (msec): min=21, max=338, avg=80.19, stdev=86.02 00:32:08.856 lat (msec): min=21, max=338, avg=80.26, stdev=86.02 00:32:08.856 clat percentiles 
(msec): 00:32:08.856 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:32:08.856 | 30.00th=[ 34], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:32:08.856 | 70.00th=[ 35], 80.00th=[ 192], 90.00th=[ 247], 95.00th=[ 257], 00:32:08.856 | 99.00th=[ 279], 99.50th=[ 279], 99.90th=[ 338], 99.95th=[ 338], 00:32:08.856 | 99.99th=[ 338] 00:32:08.856 bw ( KiB/s): min= 256, max= 1920, per=4.07%, avg=787.20, stdev=739.72, samples=20 00:32:08.856 iops : min= 64, max= 480, avg=196.80, stdev=184.93, samples=20 00:32:08.856 lat (msec) : 50=76.61%, 100=0.81%, 250=14.87%, 500=7.71% 00:32:08.856 cpu : usr=98.18%, sys=1.38%, ctx=17, majf=0, minf=12 00:32:08.856 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:32:08.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.857 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.857 issued rwts: total=1984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.857 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:08.857 filename1: (groupid=0, jobs=1): err= 0: pid=137241: Wed Apr 17 06:58:12 2024 00:32:08.857 read: IOPS=196, BW=787KiB/s (806kB/s)(7872KiB/10004msec) 00:32:08.857 slat (usec): min=6, max=221, avg=29.51, stdev=14.22 00:32:08.857 clat (msec): min=25, max=323, avg=81.05, stdev=87.66 00:32:08.857 lat (msec): min=25, max=323, avg=81.08, stdev=87.65 00:32:08.857 clat percentiles (msec): 00:32:08.857 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 35], 00:32:08.857 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:32:08.857 | 70.00th=[ 35], 80.00th=[ 194], 90.00th=[ 251], 95.00th=[ 257], 00:32:08.857 | 99.00th=[ 317], 99.50th=[ 321], 99.90th=[ 326], 99.95th=[ 326], 00:32:08.857 | 99.99th=[ 326] 00:32:08.857 bw ( KiB/s): min= 128, max= 1920, per=3.76%, avg=727.58, stdev=726.54, samples=19 00:32:08.857 iops : min= 32, max= 480, avg=181.89, stdev=181.63, samples=19 00:32:08.857 lat (msec) : 50=77.24%, 100=0.10%, 250=12.50%, 500=10.16% 00:32:08.857 cpu : usr=95.87%, sys=2.45%, ctx=45, majf=0, minf=19 00:32:08.857 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:32:08.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.857 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.857 issued rwts: total=1968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.857 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:08.857 filename1: (groupid=0, jobs=1): err= 0: pid=137242: Wed Apr 17 06:58:12 2024 00:32:08.857 read: IOPS=198, BW=792KiB/s (811kB/s)(7936KiB/10015msec) 00:32:08.857 slat (nsec): min=9617, max=74151, avg=34430.01, stdev=10268.95 00:32:08.857 clat (msec): min=27, max=333, avg=80.45, stdev=85.41 00:32:08.857 lat (msec): min=27, max=333, avg=80.48, stdev=85.40 00:32:08.857 clat percentiles (msec): 00:32:08.857 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 35], 00:32:08.857 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:32:08.857 | 70.00th=[ 35], 80.00th=[ 190], 90.00th=[ 247], 95.00th=[ 257], 00:32:08.857 | 99.00th=[ 292], 99.50th=[ 300], 99.90th=[ 334], 99.95th=[ 334], 00:32:08.857 | 99.99th=[ 334] 00:32:08.857 bw ( KiB/s): min= 128, max= 1920, per=4.07%, avg=787.20, stdev=740.88, samples=20 00:32:08.857 iops : min= 32, max= 480, avg=196.80, stdev=185.22, samples=20 00:32:08.857 lat (msec) : 50=76.61%, 250=13.91%, 500=9.48% 00:32:08.857 cpu : usr=97.99%, sys=1.51%, ctx=37, majf=0, minf=23 00:32:08.857 IO depths 
: 1=5.5%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:32:08.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.857 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.857 issued rwts: total=1984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.857 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:08.857 filename1: (groupid=0, jobs=1): err= 0: pid=137243: Wed Apr 17 06:58:12 2024 00:32:08.857 read: IOPS=198, BW=792KiB/s (811kB/s)(7928KiB/10009msec) 00:32:08.857 slat (nsec): min=8539, max=72237, avg=22798.09, stdev=9022.94 00:32:08.857 clat (msec): min=9, max=314, avg=80.60, stdev=87.55 00:32:08.857 lat (msec): min=9, max=314, avg=80.63, stdev=87.55 00:32:08.857 clat percentiles (msec): 00:32:08.857 | 1.00th=[ 27], 5.00th=[ 35], 10.00th=[ 35], 20.00th=[ 35], 00:32:08.857 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:32:08.857 | 70.00th=[ 36], 80.00th=[ 197], 90.00th=[ 249], 95.00th=[ 257], 00:32:08.857 | 99.00th=[ 279], 99.50th=[ 313], 99.90th=[ 313], 99.95th=[ 313], 00:32:08.857 | 99.99th=[ 313] 00:32:08.857 bw ( KiB/s): min= 128, max= 1904, per=3.76%, avg=727.58, stdev=725.12, samples=19 00:32:08.857 iops : min= 32, max= 476, avg=181.89, stdev=181.28, samples=19 00:32:08.857 lat (msec) : 10=0.71%, 20=0.10%, 50=76.49%, 100=0.91%, 250=13.72% 00:32:08.857 lat (msec) : 500=8.07% 00:32:08.857 cpu : usr=96.12%, sys=2.40%, ctx=98, majf=0, minf=21 00:32:08.857 IO depths : 1=2.3%, 2=8.5%, 4=25.0%, 8=54.0%, 16=10.2%, 32=0.0%, >=64=0.0% 00:32:08.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.857 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.857 issued rwts: total=1982,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.857 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:08.857 filename1: (groupid=0, jobs=1): err= 0: pid=137244: Wed Apr 17 06:58:12 2024 00:32:08.857 read: IOPS=209, BW=837KiB/s (857kB/s)(8384KiB/10022msec) 00:32:08.857 slat (nsec): min=4760, max=53759, avg=24253.57, stdev=10238.96 00:32:08.857 clat (msec): min=14, max=287, avg=76.28, stdev=75.09 00:32:08.857 lat (msec): min=14, max=287, avg=76.30, stdev=75.08 00:32:08.857 clat percentiles (msec): 00:32:08.857 | 1.00th=[ 30], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 35], 00:32:08.857 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:32:08.857 | 70.00th=[ 36], 80.00th=[ 153], 90.00th=[ 209], 95.00th=[ 245], 00:32:08.857 | 99.00th=[ 271], 99.50th=[ 288], 99.90th=[ 288], 99.95th=[ 288], 00:32:08.857 | 99.99th=[ 288] 00:32:08.857 bw ( KiB/s): min= 256, max= 1920, per=4.04%, avg=781.47, stdev=693.11, samples=19 00:32:08.857 iops : min= 64, max= 480, avg=195.37, stdev=173.28, samples=19 00:32:08.857 lat (msec) : 20=0.76%, 50=73.28%, 100=1.19%, 250=20.94%, 500=3.82% 00:32:08.857 cpu : usr=98.09%, sys=1.54%, ctx=26, majf=0, minf=20 00:32:08.857 IO depths : 1=6.1%, 2=12.3%, 4=24.6%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:08.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.857 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.857 issued rwts: total=2096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.857 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:08.857 filename1: (groupid=0, jobs=1): err= 0: pid=137245: Wed Apr 17 06:58:12 2024 00:32:08.857 read: IOPS=196, BW=786KiB/s (805kB/s)(7872KiB/10014msec) 00:32:08.857 slat (usec): min=8, max=429, avg=20.69, 
stdev=28.51 00:32:08.857 clat (msec): min=27, max=333, avg=81.24, stdev=87.39 00:32:08.857 lat (msec): min=27, max=333, avg=81.26, stdev=87.40 00:32:08.857 clat percentiles (msec): 00:32:08.857 | 1.00th=[ 34], 5.00th=[ 35], 10.00th=[ 35], 20.00th=[ 35], 00:32:08.857 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:32:08.857 | 70.00th=[ 36], 80.00th=[ 197], 90.00th=[ 251], 95.00th=[ 257], 00:32:08.857 | 99.00th=[ 296], 99.50th=[ 300], 99.90th=[ 334], 99.95th=[ 334], 00:32:08.857 | 99.99th=[ 334] 00:32:08.857 bw ( KiB/s): min= 128, max= 1920, per=4.04%, avg=780.80, stdev=745.09, samples=20 00:32:08.857 iops : min= 32, max= 480, avg=195.20, stdev=186.27, samples=20 00:32:08.857 lat (msec) : 50=77.24%, 250=12.40%, 500=10.37% 00:32:08.857 cpu : usr=95.83%, sys=2.54%, ctx=138, majf=0, minf=20 00:32:08.857 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:32:08.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.857 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.857 issued rwts: total=1968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.857 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:08.857 filename1: (groupid=0, jobs=1): err= 0: pid=137246: Wed Apr 17 06:58:12 2024 00:32:08.857 read: IOPS=198, BW=792KiB/s (811kB/s)(7928KiB/10010msec) 00:32:08.857 slat (usec): min=8, max=107, avg=26.21, stdev=14.36 00:32:08.857 clat (msec): min=9, max=314, avg=80.56, stdev=87.56 00:32:08.857 lat (msec): min=9, max=315, avg=80.58, stdev=87.57 00:32:08.857 clat percentiles (msec): 00:32:08.857 | 1.00th=[ 27], 5.00th=[ 35], 10.00th=[ 35], 20.00th=[ 35], 00:32:08.857 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:32:08.857 | 70.00th=[ 36], 80.00th=[ 197], 90.00th=[ 249], 95.00th=[ 257], 00:32:08.857 | 99.00th=[ 279], 99.50th=[ 317], 99.90th=[ 317], 99.95th=[ 317], 00:32:08.857 | 99.99th=[ 317] 00:32:08.857 bw ( KiB/s): min= 128, max= 1920, per=3.76%, avg=727.58, stdev=725.40, samples=19 00:32:08.857 iops : min= 32, max= 480, avg=181.89, stdev=181.35, samples=19 00:32:08.857 lat (msec) : 10=0.71%, 20=0.10%, 50=76.49%, 100=0.91%, 250=13.72% 00:32:08.857 lat (msec) : 500=8.07% 00:32:08.857 cpu : usr=93.83%, sys=3.35%, ctx=242, majf=0, minf=15 00:32:08.857 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:32:08.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.857 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.857 issued rwts: total=1982,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.857 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:08.857 filename1: (groupid=0, jobs=1): err= 0: pid=137247: Wed Apr 17 06:58:12 2024 00:32:08.857 read: IOPS=220, BW=880KiB/s (901kB/s)(8824KiB/10024msec) 00:32:08.857 slat (nsec): min=6023, max=48370, avg=20134.71, stdev=8067.42 00:32:08.857 clat (msec): min=16, max=227, avg=72.52, stdev=61.12 00:32:08.857 lat (msec): min=16, max=227, avg=72.54, stdev=61.12 00:32:08.857 clat percentiles (msec): 00:32:08.857 | 1.00th=[ 26], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 35], 00:32:08.857 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:32:08.857 | 70.00th=[ 45], 80.00th=[ 150], 90.00th=[ 178], 95.00th=[ 205], 00:32:08.857 | 99.00th=[ 209], 99.50th=[ 209], 99.90th=[ 228], 99.95th=[ 228], 00:32:08.857 | 99.99th=[ 228] 00:32:08.857 bw ( KiB/s): min= 256, max= 1920, per=4.29%, avg=828.63, stdev=658.98, samples=19 00:32:08.857 iops 
: min= 64, max= 480, avg=207.16, stdev=164.74, samples=19 00:32:08.857 lat (msec) : 20=0.73%, 50=69.54%, 100=0.91%, 250=28.83% 00:32:08.857 cpu : usr=97.92%, sys=1.59%, ctx=42, majf=0, minf=20 00:32:08.857 IO depths : 1=2.8%, 2=8.9%, 4=24.6%, 8=54.1%, 16=9.7%, 32=0.0%, >=64=0.0% 00:32:08.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.857 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.857 issued rwts: total=2206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.857 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:08.857 filename1: (groupid=0, jobs=1): err= 0: pid=137248: Wed Apr 17 06:58:12 2024 00:32:08.857 read: IOPS=198, BW=792KiB/s (811kB/s)(7936KiB/10015msec) 00:32:08.857 slat (usec): min=8, max=111, avg=35.30, stdev=16.80 00:32:08.857 clat (msec): min=27, max=363, avg=80.47, stdev=85.79 00:32:08.857 lat (msec): min=27, max=363, avg=80.50, stdev=85.80 00:32:08.857 clat percentiles (msec): 00:32:08.857 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 35], 00:32:08.858 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:32:08.858 | 70.00th=[ 36], 80.00th=[ 192], 90.00th=[ 249], 95.00th=[ 257], 00:32:08.858 | 99.00th=[ 305], 99.50th=[ 338], 99.90th=[ 363], 99.95th=[ 363], 00:32:08.858 | 99.99th=[ 363] 00:32:08.858 bw ( KiB/s): min= 144, max= 1920, per=4.07%, avg=787.20, stdev=740.61, samples=20 00:32:08.858 iops : min= 36, max= 480, avg=196.80, stdev=185.15, samples=20 00:32:08.858 lat (msec) : 50=76.61%, 250=15.12%, 500=8.27% 00:32:08.858 cpu : usr=97.97%, sys=1.59%, ctx=16, majf=0, minf=18 00:32:08.858 IO depths : 1=5.5%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:32:08.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.858 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.858 issued rwts: total=1984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.858 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:08.858 filename2: (groupid=0, jobs=1): err= 0: pid=137249: Wed Apr 17 06:58:12 2024 00:32:08.858 read: IOPS=198, BW=795KiB/s (814kB/s)(7960KiB/10010msec) 00:32:08.858 slat (usec): min=6, max=137, avg=25.70, stdev=10.99 00:32:08.858 clat (msec): min=9, max=314, avg=80.26, stdev=86.90 00:32:08.858 lat (msec): min=9, max=314, avg=80.29, stdev=86.90 00:32:08.858 clat percentiles (msec): 00:32:08.858 | 1.00th=[ 17], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 35], 00:32:08.858 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:32:08.858 | 70.00th=[ 36], 80.00th=[ 199], 90.00th=[ 249], 95.00th=[ 257], 00:32:08.858 | 99.00th=[ 292], 99.50th=[ 313], 99.90th=[ 313], 99.95th=[ 313], 00:32:08.858 | 99.99th=[ 313] 00:32:08.858 bw ( KiB/s): min= 128, max= 1920, per=3.78%, avg=730.11, stdev=730.52, samples=19 00:32:08.858 iops : min= 32, max= 480, avg=182.53, stdev=182.63, samples=19 00:32:08.858 lat (msec) : 10=0.80%, 20=0.20%, 50=75.98%, 100=0.50%, 250=13.67% 00:32:08.858 lat (msec) : 500=8.84% 00:32:08.858 cpu : usr=94.10%, sys=3.33%, ctx=134, majf=0, minf=21 00:32:08.858 IO depths : 1=4.9%, 2=10.7%, 4=23.5%, 8=53.1%, 16=7.9%, 32=0.0%, >=64=0.0% 00:32:08.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.858 complete : 0=0.0%, 4=93.8%, 8=0.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.858 issued rwts: total=1990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.858 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:08.858 filename2: (groupid=0, jobs=1): 
err= 0: pid=137250: Wed Apr 17 06:58:12 2024 00:32:08.858 read: IOPS=196, BW=786KiB/s (805kB/s)(7872KiB/10010msec) 00:32:08.858 slat (nsec): min=8032, max=87182, avg=29567.35, stdev=19019.13 00:32:08.858 clat (msec): min=10, max=368, avg=81.12, stdev=89.87 00:32:08.858 lat (msec): min=10, max=368, avg=81.15, stdev=89.88 00:32:08.858 clat percentiles (msec): 00:32:08.858 | 1.00th=[ 27], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 35], 00:32:08.858 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:32:08.858 | 70.00th=[ 36], 80.00th=[ 201], 90.00th=[ 249], 95.00th=[ 262], 00:32:08.858 | 99.00th=[ 359], 99.50th=[ 368], 99.90th=[ 368], 99.95th=[ 368], 00:32:08.858 | 99.99th=[ 368] 00:32:08.858 bw ( KiB/s): min= 128, max= 1888, per=3.74%, avg=723.37, stdev=718.21, samples=19 00:32:08.858 iops : min= 32, max= 472, avg=180.84, stdev=179.55, samples=19 00:32:08.858 lat (msec) : 20=0.51%, 50=77.03%, 100=0.61%, 250=12.91%, 500=8.94% 00:32:08.858 cpu : usr=98.04%, sys=1.50%, ctx=72, majf=0, minf=16 00:32:08.858 IO depths : 1=3.4%, 2=7.1%, 4=14.7%, 8=63.2%, 16=11.6%, 32=0.0%, >=64=0.0% 00:32:08.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.858 complete : 0=0.0%, 4=92.0%, 8=4.6%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.858 issued rwts: total=1968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.858 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:08.858 filename2: (groupid=0, jobs=1): err= 0: pid=137251: Wed Apr 17 06:58:12 2024 00:32:08.858 read: IOPS=199, BW=800KiB/s (819kB/s)(8000KiB/10003msec) 00:32:08.858 slat (usec): min=4, max=122, avg=63.32, stdev=23.61 00:32:08.858 clat (msec): min=16, max=332, avg=79.50, stdev=85.09 00:32:08.858 lat (msec): min=16, max=332, avg=79.56, stdev=85.08 00:32:08.858 clat percentiles (msec): 00:32:08.858 | 1.00th=[ 28], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:32:08.858 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:32:08.858 | 70.00th=[ 35], 80.00th=[ 192], 90.00th=[ 247], 95.00th=[ 255], 00:32:08.858 | 99.00th=[ 292], 99.50th=[ 296], 99.90th=[ 334], 99.95th=[ 334], 00:32:08.858 | 99.99th=[ 334] 00:32:08.858 bw ( KiB/s): min= 128, max= 1920, per=3.84%, avg=741.11, stdev=710.84, samples=19 00:32:08.858 iops : min= 32, max= 480, avg=185.26, stdev=177.71, samples=19 00:32:08.858 lat (msec) : 20=0.80%, 50=76.00%, 250=13.75%, 500=9.45% 00:32:08.858 cpu : usr=96.14%, sys=2.41%, ctx=284, majf=0, minf=23 00:32:08.858 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:32:08.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.858 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.858 issued rwts: total=2000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.858 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:08.858 filename2: (groupid=0, jobs=1): err= 0: pid=137252: Wed Apr 17 06:58:12 2024 00:32:08.858 read: IOPS=196, BW=787KiB/s (806kB/s)(7872KiB/10005msec) 00:32:08.858 slat (usec): min=13, max=128, avg=68.85, stdev=20.26 00:32:08.858 clat (msec): min=27, max=321, avg=80.73, stdev=87.19 00:32:08.858 lat (msec): min=27, max=321, avg=80.79, stdev=87.19 00:32:08.858 clat percentiles (msec): 00:32:08.858 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:32:08.858 | 30.00th=[ 34], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:32:08.858 | 70.00th=[ 35], 80.00th=[ 197], 90.00th=[ 251], 95.00th=[ 257], 00:32:08.858 | 99.00th=[ 292], 99.50th=[ 296], 99.90th=[ 321], 99.95th=[ 
321], 00:32:08.858 | 99.99th=[ 321] 00:32:08.858 bw ( KiB/s): min= 128, max= 1920, per=3.76%, avg=727.58, stdev=725.42, samples=19 00:32:08.858 iops : min= 32, max= 480, avg=181.89, stdev=181.35, samples=19 00:32:08.858 lat (msec) : 50=77.24%, 250=12.55%, 500=10.21% 00:32:08.858 cpu : usr=98.21%, sys=1.31%, ctx=98, majf=0, minf=14 00:32:08.858 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:32:08.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.858 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.858 issued rwts: total=1968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.858 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:08.858 filename2: (groupid=0, jobs=1): err= 0: pid=137253: Wed Apr 17 06:58:12 2024 00:32:08.858 read: IOPS=197, BW=790KiB/s (809kB/s)(7904KiB/10010msec) 00:32:08.858 slat (nsec): min=8319, max=64084, avg=24137.64, stdev=7210.52 00:32:08.858 clat (msec): min=9, max=315, avg=80.85, stdev=87.73 00:32:08.858 lat (msec): min=9, max=315, avg=80.87, stdev=87.73 00:32:08.858 clat percentiles (msec): 00:32:08.858 | 1.00th=[ 21], 5.00th=[ 21], 10.00th=[ 35], 20.00th=[ 35], 00:32:08.858 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:32:08.858 | 70.00th=[ 36], 80.00th=[ 197], 90.00th=[ 249], 95.00th=[ 257], 00:32:08.858 | 99.00th=[ 279], 99.50th=[ 317], 99.90th=[ 317], 99.95th=[ 317], 00:32:08.858 | 99.99th=[ 317] 00:32:08.858 bw ( KiB/s): min= 128, max= 1904, per=3.76%, avg=727.58, stdev=724.97, samples=19 00:32:08.858 iops : min= 32, max= 476, avg=181.89, stdev=181.24, samples=19 00:32:08.858 lat (msec) : 10=0.40%, 20=0.76%, 50=76.16%, 100=0.81%, 250=13.77% 00:32:08.858 lat (msec) : 500=8.10% 00:32:08.858 cpu : usr=97.77%, sys=1.72%, ctx=41, majf=0, minf=16 00:32:08.858 IO depths : 1=1.7%, 2=6.8%, 4=20.4%, 8=59.3%, 16=11.8%, 32=0.0%, >=64=0.0% 00:32:08.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.858 complete : 0=0.0%, 4=93.4%, 8=2.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.858 issued rwts: total=1976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.858 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:08.858 filename2: (groupid=0, jobs=1): err= 0: pid=137254: Wed Apr 17 06:58:12 2024 00:32:08.858 read: IOPS=218, BW=873KiB/s (894kB/s)(8736KiB/10003msec) 00:32:08.858 slat (usec): min=7, max=192, avg=24.37, stdev=17.02 00:32:08.858 clat (msec): min=21, max=264, avg=73.07, stdev=63.80 00:32:08.858 lat (msec): min=21, max=264, avg=73.09, stdev=63.80 00:32:08.858 clat percentiles (msec): 00:32:08.858 | 1.00th=[ 22], 5.00th=[ 30], 10.00th=[ 34], 20.00th=[ 35], 00:32:08.858 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:32:08.858 | 70.00th=[ 36], 80.00th=[ 153], 90.00th=[ 190], 95.00th=[ 197], 00:32:08.858 | 99.00th=[ 215], 99.50th=[ 222], 99.90th=[ 266], 99.95th=[ 266], 00:32:08.858 | 99.99th=[ 266] 00:32:08.858 bw ( KiB/s): min= 256, max= 1920, per=4.23%, avg=818.53, stdev=688.89, samples=19 00:32:08.858 iops : min= 64, max= 480, avg=204.63, stdev=172.22, samples=19 00:32:08.858 lat (msec) : 50=71.34%, 250=28.30%, 500=0.37% 00:32:08.858 cpu : usr=97.86%, sys=1.65%, ctx=27, majf=0, minf=29 00:32:08.858 IO depths : 1=4.3%, 2=9.7%, 4=22.1%, 8=55.7%, 16=8.2%, 32=0.0%, >=64=0.0% 00:32:08.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.858 complete : 0=0.0%, 4=93.3%, 8=1.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.858 issued rwts: 
total=2184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.858 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:08.858 filename2: (groupid=0, jobs=1): err= 0: pid=137255: Wed Apr 17 06:58:12 2024 00:32:08.858 read: IOPS=196, BW=786KiB/s (805kB/s)(7872KiB/10015msec) 00:32:08.858 slat (usec): min=8, max=233, avg=57.38, stdev=24.17 00:32:08.858 clat (msec): min=27, max=332, avg=80.97, stdev=87.50 00:32:08.858 lat (msec): min=27, max=332, avg=81.03, stdev=87.50 00:32:08.858 clat percentiles (msec): 00:32:08.858 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:32:08.858 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:32:08.858 | 70.00th=[ 35], 80.00th=[ 194], 90.00th=[ 251], 95.00th=[ 257], 00:32:08.858 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 334], 99.95th=[ 334], 00:32:08.858 | 99.99th=[ 334] 00:32:08.858 bw ( KiB/s): min= 128, max= 1920, per=4.04%, avg=780.80, stdev=745.11, samples=20 00:32:08.858 iops : min= 32, max= 480, avg=195.20, stdev=186.28, samples=20 00:32:08.858 lat (msec) : 50=77.24%, 250=12.40%, 500=10.37% 00:32:08.858 cpu : usr=92.33%, sys=3.62%, ctx=213, majf=0, minf=20 00:32:08.858 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:32:08.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.858 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.858 issued rwts: total=1968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.859 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:08.859 filename2: (groupid=0, jobs=1): err= 0: pid=137256: Wed Apr 17 06:58:12 2024 00:32:08.859 read: IOPS=198, BW=793KiB/s (812kB/s)(7936KiB/10004msec) 00:32:08.859 slat (usec): min=8, max=186, avg=24.54, stdev=14.69 00:32:08.859 clat (msec): min=33, max=365, avg=80.43, stdev=85.34 00:32:08.859 lat (msec): min=33, max=365, avg=80.46, stdev=85.34 00:32:08.859 clat percentiles (msec): 00:32:08.859 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 35], 00:32:08.859 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:32:08.859 | 70.00th=[ 36], 80.00th=[ 192], 90.00th=[ 245], 95.00th=[ 255], 00:32:08.859 | 99.00th=[ 288], 99.50th=[ 288], 99.90th=[ 368], 99.95th=[ 368], 00:32:08.859 | 99.99th=[ 368] 00:32:08.859 bw ( KiB/s): min= 240, max= 1920, per=3.80%, avg=734.32, stdev=712.21, samples=19 00:32:08.859 iops : min= 60, max= 480, avg=183.58, stdev=178.05, samples=19 00:32:08.859 lat (msec) : 50=76.61%, 250=14.52%, 500=8.87% 00:32:08.859 cpu : usr=97.06%, sys=1.87%, ctx=31, majf=0, minf=14 00:32:08.859 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:32:08.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.859 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.859 issued rwts: total=1984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.859 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:08.859 00:32:08.859 Run status group 0 (all jobs): 00:32:08.859 READ: bw=18.9MiB/s (19.8MB/s), 786KiB/s-897KiB/s (805kB/s-918kB/s), io=189MiB (198MB), run=10003-10026msec 00:32:08.859 06:58:12 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:32:08.859 06:58:12 -- target/dif.sh@43 -- # local sub 00:32:08.859 06:58:12 -- target/dif.sh@45 -- # for sub in "$@" 00:32:08.859 06:58:12 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:08.859 06:58:12 -- target/dif.sh@36 -- # local sub_id=0 00:32:08.859 06:58:12 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:32:08.859 06:58:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:08.859 06:58:12 -- common/autotest_common.sh@10 -- # set +x 00:32:08.859 06:58:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:08.859 06:58:12 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:08.859 06:58:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:08.859 06:58:12 -- common/autotest_common.sh@10 -- # set +x 00:32:08.859 06:58:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:08.859 06:58:12 -- target/dif.sh@45 -- # for sub in "$@" 00:32:08.859 06:58:12 -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:08.859 06:58:12 -- target/dif.sh@36 -- # local sub_id=1 00:32:08.859 06:58:12 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:08.859 06:58:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:08.859 06:58:12 -- common/autotest_common.sh@10 -- # set +x 00:32:08.859 06:58:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:08.859 06:58:12 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:08.859 06:58:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:08.859 06:58:12 -- common/autotest_common.sh@10 -- # set +x 00:32:08.859 06:58:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:08.859 06:58:12 -- target/dif.sh@45 -- # for sub in "$@" 00:32:08.859 06:58:12 -- target/dif.sh@46 -- # destroy_subsystem 2 00:32:08.859 06:58:12 -- target/dif.sh@36 -- # local sub_id=2 00:32:08.859 06:58:12 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:08.859 06:58:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:08.859 06:58:12 -- common/autotest_common.sh@10 -- # set +x 00:32:08.859 06:58:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:08.859 06:58:12 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:32:08.859 06:58:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:08.859 06:58:12 -- common/autotest_common.sh@10 -- # set +x 00:32:08.859 06:58:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:08.859 06:58:12 -- target/dif.sh@115 -- # NULL_DIF=1 00:32:08.859 06:58:12 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:32:08.859 06:58:12 -- target/dif.sh@115 -- # numjobs=2 00:32:08.859 06:58:12 -- target/dif.sh@115 -- # iodepth=8 00:32:08.859 06:58:12 -- target/dif.sh@115 -- # runtime=5 00:32:08.859 06:58:12 -- target/dif.sh@115 -- # files=1 00:32:08.859 06:58:12 -- target/dif.sh@117 -- # create_subsystems 0 1 00:32:08.859 06:58:12 -- target/dif.sh@28 -- # local sub 00:32:08.859 06:58:12 -- target/dif.sh@30 -- # for sub in "$@" 00:32:08.859 06:58:12 -- target/dif.sh@31 -- # create_subsystem 0 00:32:08.859 06:58:12 -- target/dif.sh@18 -- # local sub_id=0 00:32:08.859 06:58:12 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:08.859 06:58:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:08.859 06:58:12 -- common/autotest_common.sh@10 -- # set +x 00:32:08.859 bdev_null0 00:32:08.859 06:58:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:08.859 06:58:12 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:08.859 06:58:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:08.859 06:58:12 -- common/autotest_common.sh@10 -- # set +x 00:32:08.859 06:58:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:08.859 
06:58:12 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:08.859 06:58:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:08.859 06:58:12 -- common/autotest_common.sh@10 -- # set +x 00:32:08.859 06:58:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:08.859 06:58:12 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:08.859 06:58:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:08.859 06:58:12 -- common/autotest_common.sh@10 -- # set +x 00:32:08.859 [2024-04-17 06:58:12.460023] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:08.859 06:58:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:08.859 06:58:12 -- target/dif.sh@30 -- # for sub in "$@" 00:32:08.859 06:58:12 -- target/dif.sh@31 -- # create_subsystem 1 00:32:08.859 06:58:12 -- target/dif.sh@18 -- # local sub_id=1 00:32:08.859 06:58:12 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:08.859 06:58:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:08.859 06:58:12 -- common/autotest_common.sh@10 -- # set +x 00:32:08.859 bdev_null1 00:32:08.859 06:58:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:08.859 06:58:12 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:08.859 06:58:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:08.859 06:58:12 -- common/autotest_common.sh@10 -- # set +x 00:32:08.859 06:58:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:08.859 06:58:12 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:08.859 06:58:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:08.859 06:58:12 -- common/autotest_common.sh@10 -- # set +x 00:32:08.859 06:58:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:08.859 06:58:12 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:08.859 06:58:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:08.859 06:58:12 -- common/autotest_common.sh@10 -- # set +x 00:32:08.859 06:58:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:08.859 06:58:12 -- target/dif.sh@118 -- # fio /dev/fd/62 00:32:08.859 06:58:12 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:32:08.859 06:58:12 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:08.859 06:58:12 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:08.859 06:58:12 -- nvmf/common.sh@521 -- # config=() 00:32:08.859 06:58:12 -- nvmf/common.sh@521 -- # local subsystem config 00:32:08.859 06:58:12 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:08.859 06:58:12 -- target/dif.sh@82 -- # gen_fio_conf 00:32:08.859 06:58:12 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:08.859 06:58:12 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:32:08.859 06:58:12 -- target/dif.sh@54 -- # local file 00:32:08.859 06:58:12 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:08.859 { 00:32:08.859 "params": { 00:32:08.859 "name": "Nvme$subsystem", 00:32:08.859 "trtype": "$TEST_TRANSPORT", 00:32:08.859 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:32:08.859 "adrfam": "ipv4", 00:32:08.859 "trsvcid": "$NVMF_PORT", 00:32:08.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:08.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:08.859 "hdgst": ${hdgst:-false}, 00:32:08.859 "ddgst": ${ddgst:-false} 00:32:08.859 }, 00:32:08.859 "method": "bdev_nvme_attach_controller" 00:32:08.859 } 00:32:08.859 EOF 00:32:08.859 )") 00:32:08.859 06:58:12 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:08.859 06:58:12 -- target/dif.sh@56 -- # cat 00:32:08.859 06:58:12 -- common/autotest_common.sh@1325 -- # local sanitizers 00:32:08.859 06:58:12 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:08.859 06:58:12 -- common/autotest_common.sh@1327 -- # shift 00:32:08.859 06:58:12 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:32:08.859 06:58:12 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:08.859 06:58:12 -- nvmf/common.sh@543 -- # cat 00:32:08.859 06:58:12 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:08.859 06:58:12 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:08.859 06:58:12 -- target/dif.sh@72 -- # (( file <= files )) 00:32:08.859 06:58:12 -- target/dif.sh@73 -- # cat 00:32:08.859 06:58:12 -- common/autotest_common.sh@1331 -- # grep libasan 00:32:08.859 06:58:12 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:08.859 06:58:12 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:08.859 06:58:12 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:08.859 { 00:32:08.859 "params": { 00:32:08.859 "name": "Nvme$subsystem", 00:32:08.859 "trtype": "$TEST_TRANSPORT", 00:32:08.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:08.859 "adrfam": "ipv4", 00:32:08.859 "trsvcid": "$NVMF_PORT", 00:32:08.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:08.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:08.859 "hdgst": ${hdgst:-false}, 00:32:08.859 "ddgst": ${ddgst:-false} 00:32:08.859 }, 00:32:08.859 "method": "bdev_nvme_attach_controller" 00:32:08.859 } 00:32:08.859 EOF 00:32:08.859 )") 00:32:08.859 06:58:12 -- target/dif.sh@72 -- # (( file++ )) 00:32:08.859 06:58:12 -- target/dif.sh@72 -- # (( file <= files )) 00:32:08.860 06:58:12 -- nvmf/common.sh@543 -- # cat 00:32:08.860 06:58:12 -- nvmf/common.sh@545 -- # jq . 
00:32:08.860 06:58:12 -- nvmf/common.sh@546 -- # IFS=, 00:32:08.860 06:58:12 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:32:08.860 "params": { 00:32:08.860 "name": "Nvme0", 00:32:08.860 "trtype": "tcp", 00:32:08.860 "traddr": "10.0.0.2", 00:32:08.860 "adrfam": "ipv4", 00:32:08.860 "trsvcid": "4420", 00:32:08.860 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:08.860 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:08.860 "hdgst": false, 00:32:08.860 "ddgst": false 00:32:08.860 }, 00:32:08.860 "method": "bdev_nvme_attach_controller" 00:32:08.860 },{ 00:32:08.860 "params": { 00:32:08.860 "name": "Nvme1", 00:32:08.860 "trtype": "tcp", 00:32:08.860 "traddr": "10.0.0.2", 00:32:08.860 "adrfam": "ipv4", 00:32:08.860 "trsvcid": "4420", 00:32:08.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:08.860 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:08.860 "hdgst": false, 00:32:08.860 "ddgst": false 00:32:08.860 }, 00:32:08.860 "method": "bdev_nvme_attach_controller" 00:32:08.860 }' 00:32:08.860 06:58:12 -- common/autotest_common.sh@1331 -- # asan_lib= 00:32:08.860 06:58:12 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:32:08.860 06:58:12 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:08.860 06:58:12 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:08.860 06:58:12 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:32:08.860 06:58:12 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:08.860 06:58:12 -- common/autotest_common.sh@1331 -- # asan_lib= 00:32:08.860 06:58:12 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:32:08.860 06:58:12 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:08.860 06:58:12 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:08.860 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:08.860 ... 00:32:08.860 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:08.860 ... 
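For the run that follows, the bs=8k,16k,128k spec from target/dif.sh@115 shows up in the job banners as separate read/write/trim block sizes (bs=(R) 8192B, (W) 16.0KiB, (T) 128KiB), and numjobs=2 across the two job sections accounts for the "Starting 4 threads" line. A quick sanity check on the throughput figures reported below, assuming only the 8 KiB random-read side is exercised here:

    # bandwidth ≈ IOPS × read block size
    echo $(( 1841 * 8192 / 1024 / 1024 ))   # ≈ 14 MiB/s, in line with the 14.4MiB/s fio reports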
00:32:08.860 fio-3.35 00:32:08.860 Starting 4 threads 00:32:08.860 EAL: No free 2048 kB hugepages reported on node 1 00:32:14.121 00:32:14.121 filename0: (groupid=0, jobs=1): err= 0: pid=138750: Wed Apr 17 06:58:18 2024 00:32:14.121 read: IOPS=1841, BW=14.4MiB/s (15.1MB/s)(72.0MiB/5003msec) 00:32:14.121 slat (nsec): min=6231, max=56939, avg=11793.33, stdev=5760.06 00:32:14.121 clat (usec): min=1910, max=7401, avg=4309.10, stdev=701.69 00:32:14.121 lat (usec): min=1923, max=7415, avg=4320.90, stdev=701.49 00:32:14.121 clat percentiles (usec): 00:32:14.121 | 1.00th=[ 2999], 5.00th=[ 3458], 10.00th=[ 3654], 20.00th=[ 3851], 00:32:14.121 | 30.00th=[ 3982], 40.00th=[ 4080], 50.00th=[ 4178], 60.00th=[ 4293], 00:32:14.121 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 5276], 95.00th=[ 6063], 00:32:14.121 | 99.00th=[ 6521], 99.50th=[ 6652], 99.90th=[ 7046], 99.95th=[ 7111], 00:32:14.121 | 99.99th=[ 7373] 00:32:14.121 bw ( KiB/s): min=14384, max=15440, per=25.20%, avg=14734.10, stdev=308.33, samples=10 00:32:14.121 iops : min= 1798, max= 1930, avg=1841.70, stdev=38.52, samples=10 00:32:14.121 lat (msec) : 2=0.01%, 4=31.38%, 10=68.61% 00:32:14.121 cpu : usr=94.82%, sys=4.70%, ctx=8, majf=0, minf=31 00:32:14.121 IO depths : 1=0.2%, 2=2.1%, 4=68.7%, 8=29.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:14.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.121 complete : 0=0.0%, 4=94.0%, 8=6.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.121 issued rwts: total=9212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:14.121 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:14.121 filename0: (groupid=0, jobs=1): err= 0: pid=138751: Wed Apr 17 06:58:18 2024 00:32:14.121 read: IOPS=1845, BW=14.4MiB/s (15.1MB/s)(72.1MiB/5001msec) 00:32:14.121 slat (nsec): min=6302, max=58003, avg=11114.94, stdev=5530.32 00:32:14.121 clat (usec): min=1119, max=7491, avg=4301.15, stdev=710.75 00:32:14.121 lat (usec): min=1131, max=7510, avg=4312.27, stdev=710.19 00:32:14.121 clat percentiles (usec): 00:32:14.121 | 1.00th=[ 2966], 5.00th=[ 3458], 10.00th=[ 3654], 20.00th=[ 3818], 00:32:14.121 | 30.00th=[ 3949], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4293], 00:32:14.121 | 70.00th=[ 4359], 80.00th=[ 4555], 90.00th=[ 5342], 95.00th=[ 5997], 00:32:14.121 | 99.00th=[ 6456], 99.50th=[ 6849], 99.90th=[ 7177], 99.95th=[ 7308], 00:32:14.121 | 99.99th=[ 7504] 00:32:14.121 bw ( KiB/s): min=14384, max=15408, per=25.23%, avg=14755.56, stdev=415.37, samples=9 00:32:14.121 iops : min= 1798, max= 1926, avg=1844.44, stdev=51.92, samples=9 00:32:14.121 lat (msec) : 2=0.03%, 4=32.65%, 10=67.32% 00:32:14.121 cpu : usr=94.86%, sys=4.62%, ctx=9, majf=0, minf=39 00:32:14.121 IO depths : 1=0.1%, 2=3.1%, 4=68.9%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:14.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.121 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.121 issued rwts: total=9228,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:14.121 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:14.121 filename1: (groupid=0, jobs=1): err= 0: pid=138752: Wed Apr 17 06:58:18 2024 00:32:14.121 read: IOPS=1831, BW=14.3MiB/s (15.0MB/s)(71.6MiB/5004msec) 00:32:14.121 slat (usec): min=6, max=188, avg=11.35, stdev= 5.72 00:32:14.121 clat (usec): min=2319, max=7518, avg=4333.62, stdev=640.08 00:32:14.121 lat (usec): min=2326, max=7525, avg=4344.97, stdev=639.62 00:32:14.121 clat percentiles (usec): 00:32:14.121 | 1.00th=[ 3064], 5.00th=[ 3589], 10.00th=[ 
3785], 20.00th=[ 3949], 00:32:14.121 | 30.00th=[ 4047], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4293], 00:32:14.121 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 5014], 95.00th=[ 5932], 00:32:14.122 | 99.00th=[ 6587], 99.50th=[ 6915], 99.90th=[ 7177], 99.95th=[ 7308], 00:32:14.122 | 99.99th=[ 7504] 00:32:14.122 bw ( KiB/s): min=14160, max=15216, per=25.06%, avg=14654.40, stdev=362.00, samples=10 00:32:14.122 iops : min= 1770, max= 1902, avg=1831.80, stdev=45.25, samples=10 00:32:14.122 lat (msec) : 4=23.57%, 10=76.43% 00:32:14.122 cpu : usr=94.88%, sys=4.62%, ctx=11, majf=0, minf=77 00:32:14.122 IO depths : 1=0.1%, 2=1.0%, 4=69.4%, 8=29.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:14.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.122 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.122 issued rwts: total=9167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:14.122 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:14.122 filename1: (groupid=0, jobs=1): err= 0: pid=138753: Wed Apr 17 06:58:18 2024 00:32:14.122 read: IOPS=1792, BW=14.0MiB/s (14.7MB/s)(70.1MiB/5002msec) 00:32:14.122 slat (nsec): min=6246, max=58738, avg=15183.29, stdev=7489.91 00:32:14.122 clat (usec): min=2231, max=47324, avg=4414.02, stdev=1466.37 00:32:14.122 lat (usec): min=2247, max=47344, avg=4429.21, stdev=1465.80 00:32:14.122 clat percentiles (usec): 00:32:14.122 | 1.00th=[ 3228], 5.00th=[ 3589], 10.00th=[ 3752], 20.00th=[ 3916], 00:32:14.122 | 30.00th=[ 4015], 40.00th=[ 4113], 50.00th=[ 4228], 60.00th=[ 4293], 00:32:14.122 | 70.00th=[ 4424], 80.00th=[ 4621], 90.00th=[ 5538], 95.00th=[ 6063], 00:32:14.122 | 99.00th=[ 6587], 99.50th=[ 6849], 99.90th=[ 7963], 99.95th=[47449], 00:32:14.122 | 99.99th=[47449] 00:32:14.122 bw ( KiB/s): min=12720, max=15328, per=24.52%, avg=14336.00, stdev=693.56, samples=9 00:32:14.122 iops : min= 1590, max= 1916, avg=1792.00, stdev=86.69, samples=9 00:32:14.122 lat (msec) : 4=27.32%, 10=72.59%, 50=0.09% 00:32:14.122 cpu : usr=94.50%, sys=4.82%, ctx=9, majf=0, minf=53 00:32:14.122 IO depths : 1=0.4%, 2=1.2%, 4=71.9%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:14.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.122 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.122 issued rwts: total=8967,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:14.122 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:14.122 00:32:14.122 Run status group 0 (all jobs): 00:32:14.122 READ: bw=57.1MiB/s (59.9MB/s), 14.0MiB/s-14.4MiB/s (14.7MB/s-15.1MB/s), io=286MiB (300MB), run=5001-5004msec 00:32:14.380 06:58:18 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:32:14.380 06:58:18 -- target/dif.sh@43 -- # local sub 00:32:14.380 06:58:18 -- target/dif.sh@45 -- # for sub in "$@" 00:32:14.380 06:58:18 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:14.380 06:58:18 -- target/dif.sh@36 -- # local sub_id=0 00:32:14.380 06:58:18 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:14.380 06:58:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:14.380 06:58:18 -- common/autotest_common.sh@10 -- # set +x 00:32:14.380 06:58:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:14.380 06:58:18 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:14.380 06:58:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:14.380 06:58:18 -- common/autotest_common.sh@10 -- # set +x 00:32:14.380 06:58:18 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:14.380 06:58:18 -- target/dif.sh@45 -- # for sub in "$@" 00:32:14.380 06:58:18 -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:14.380 06:58:18 -- target/dif.sh@36 -- # local sub_id=1 00:32:14.380 06:58:18 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:14.380 06:58:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:14.380 06:58:18 -- common/autotest_common.sh@10 -- # set +x 00:32:14.380 06:58:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:14.380 06:58:18 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:14.380 06:58:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:14.380 06:58:18 -- common/autotest_common.sh@10 -- # set +x 00:32:14.380 06:58:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:14.380 00:32:14.380 real 0m24.723s 00:32:14.380 user 4m30.528s 00:32:14.380 sys 0m7.748s 00:32:14.380 06:58:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:14.380 06:58:18 -- common/autotest_common.sh@10 -- # set +x 00:32:14.380 ************************************ 00:32:14.380 END TEST fio_dif_rand_params 00:32:14.380 ************************************ 00:32:14.380 06:58:18 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:32:14.380 06:58:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:14.380 06:58:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:14.380 06:58:18 -- common/autotest_common.sh@10 -- # set +x 00:32:14.638 ************************************ 00:32:14.638 START TEST fio_dif_digest 00:32:14.638 ************************************ 00:32:14.638 06:58:19 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:32:14.638 06:58:19 -- target/dif.sh@123 -- # local NULL_DIF 00:32:14.638 06:58:19 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:32:14.638 06:58:19 -- target/dif.sh@125 -- # local hdgst ddgst 00:32:14.638 06:58:19 -- target/dif.sh@127 -- # NULL_DIF=3 00:32:14.638 06:58:19 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:32:14.638 06:58:19 -- target/dif.sh@127 -- # numjobs=3 00:32:14.638 06:58:19 -- target/dif.sh@127 -- # iodepth=3 00:32:14.638 06:58:19 -- target/dif.sh@127 -- # runtime=10 00:32:14.638 06:58:19 -- target/dif.sh@128 -- # hdgst=true 00:32:14.638 06:58:19 -- target/dif.sh@128 -- # ddgst=true 00:32:14.638 06:58:19 -- target/dif.sh@130 -- # create_subsystems 0 00:32:14.638 06:58:19 -- target/dif.sh@28 -- # local sub 00:32:14.638 06:58:19 -- target/dif.sh@30 -- # for sub in "$@" 00:32:14.638 06:58:19 -- target/dif.sh@31 -- # create_subsystem 0 00:32:14.638 06:58:19 -- target/dif.sh@18 -- # local sub_id=0 00:32:14.638 06:58:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:14.638 06:58:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:14.638 06:58:19 -- common/autotest_common.sh@10 -- # set +x 00:32:14.638 bdev_null0 00:32:14.638 06:58:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:14.638 06:58:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:14.638 06:58:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:14.638 06:58:19 -- common/autotest_common.sh@10 -- # set +x 00:32:14.638 06:58:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:14.638 06:58:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:14.639 
06:58:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:14.639 06:58:19 -- common/autotest_common.sh@10 -- # set +x 00:32:14.639 06:58:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:14.639 06:58:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:14.639 06:58:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:14.639 06:58:19 -- common/autotest_common.sh@10 -- # set +x 00:32:14.639 [2024-04-17 06:58:19.084864] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:14.639 06:58:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:14.639 06:58:19 -- target/dif.sh@131 -- # fio /dev/fd/62 00:32:14.639 06:58:19 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:32:14.639 06:58:19 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:14.639 06:58:19 -- nvmf/common.sh@521 -- # config=() 00:32:14.639 06:58:19 -- nvmf/common.sh@521 -- # local subsystem config 00:32:14.639 06:58:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:14.639 06:58:19 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:14.639 06:58:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:14.639 { 00:32:14.639 "params": { 00:32:14.639 "name": "Nvme$subsystem", 00:32:14.639 "trtype": "$TEST_TRANSPORT", 00:32:14.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:14.639 "adrfam": "ipv4", 00:32:14.639 "trsvcid": "$NVMF_PORT", 00:32:14.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:14.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:14.639 "hdgst": ${hdgst:-false}, 00:32:14.639 "ddgst": ${ddgst:-false} 00:32:14.639 }, 00:32:14.639 "method": "bdev_nvme_attach_controller" 00:32:14.639 } 00:32:14.639 EOF 00:32:14.639 )") 00:32:14.639 06:58:19 -- target/dif.sh@82 -- # gen_fio_conf 00:32:14.639 06:58:19 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:14.639 06:58:19 -- target/dif.sh@54 -- # local file 00:32:14.639 06:58:19 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:32:14.639 06:58:19 -- target/dif.sh@56 -- # cat 00:32:14.639 06:58:19 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:14.639 06:58:19 -- common/autotest_common.sh@1325 -- # local sanitizers 00:32:14.639 06:58:19 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:14.639 06:58:19 -- common/autotest_common.sh@1327 -- # shift 00:32:14.639 06:58:19 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:32:14.639 06:58:19 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:14.639 06:58:19 -- nvmf/common.sh@543 -- # cat 00:32:14.639 06:58:19 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:14.639 06:58:19 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:14.639 06:58:19 -- target/dif.sh@72 -- # (( file <= files )) 00:32:14.639 06:58:19 -- common/autotest_common.sh@1331 -- # grep libasan 00:32:14.639 06:58:19 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:14.639 06:58:19 -- nvmf/common.sh@545 -- # jq . 
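Before fio is launched below, the target side of the digest test has been assembled by the rpc_cmd calls traced above: a 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3, exposed through subsystem cnode0 with a TCP listener on 10.0.0.2:4420. rpc_cmd is the harness wrapper around scripts/rpc.py, so issued by hand the same sequence is roughly:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The JSON that gen_nvmf_target_json builds here differs from the earlier run only in that hdgst and ddgst are set to true, which is what makes this a digest test.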
00:32:14.639 06:58:19 -- nvmf/common.sh@546 -- # IFS=, 00:32:14.639 06:58:19 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:32:14.639 "params": { 00:32:14.639 "name": "Nvme0", 00:32:14.639 "trtype": "tcp", 00:32:14.639 "traddr": "10.0.0.2", 00:32:14.639 "adrfam": "ipv4", 00:32:14.639 "trsvcid": "4420", 00:32:14.639 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:14.639 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:14.639 "hdgst": true, 00:32:14.639 "ddgst": true 00:32:14.639 }, 00:32:14.639 "method": "bdev_nvme_attach_controller" 00:32:14.639 }' 00:32:14.639 06:58:19 -- common/autotest_common.sh@1331 -- # asan_lib= 00:32:14.639 06:58:19 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:32:14.639 06:58:19 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:14.639 06:58:19 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:14.639 06:58:19 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:32:14.639 06:58:19 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:14.639 06:58:19 -- common/autotest_common.sh@1331 -- # asan_lib= 00:32:14.639 06:58:19 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:32:14.639 06:58:19 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:14.639 06:58:19 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:14.897 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:14.897 ... 00:32:14.897 fio-3.35 00:32:14.897 Starting 3 threads 00:32:14.897 EAL: No free 2048 kB hugepages reported on node 1 00:32:27.094 00:32:27.094 filename0: (groupid=0, jobs=1): err= 0: pid=139516: Wed Apr 17 06:58:29 2024 00:32:27.094 read: IOPS=195, BW=24.5MiB/s (25.7MB/s)(246MiB/10045msec) 00:32:27.094 slat (nsec): min=5037, max=72464, avg=15110.60, stdev=5198.28 00:32:27.094 clat (usec): min=8682, max=58225, avg=15279.88, stdev=3888.26 00:32:27.094 lat (usec): min=8695, max=58239, avg=15294.99, stdev=3888.38 00:32:27.094 clat percentiles (usec): 00:32:27.094 | 1.00th=[10028], 5.00th=[11076], 10.00th=[12387], 20.00th=[13829], 00:32:27.094 | 30.00th=[14484], 40.00th=[14877], 50.00th=[15270], 60.00th=[15664], 00:32:27.094 | 70.00th=[15926], 80.00th=[16319], 90.00th=[16909], 95.00th=[17433], 00:32:27.094 | 99.00th=[19006], 99.50th=[55313], 99.90th=[57934], 99.95th=[58459], 00:32:27.094 | 99.99th=[58459] 00:32:27.094 bw ( KiB/s): min=22272, max=28416, per=34.48%, avg=25152.00, stdev=1414.11, samples=20 00:32:27.094 iops : min= 174, max= 222, avg=196.50, stdev=11.05, samples=20 00:32:27.094 lat (msec) : 10=0.92%, 20=98.32%, 50=0.05%, 100=0.71% 00:32:27.094 cpu : usr=91.73%, sys=7.78%, ctx=24, majf=0, minf=225 00:32:27.094 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:27.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.094 issued rwts: total=1967,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.094 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:27.094 filename0: (groupid=0, jobs=1): err= 0: pid=139517: Wed Apr 17 06:58:29 2024 00:32:27.094 read: IOPS=186, BW=23.3MiB/s (24.5MB/s)(235MiB/10048msec) 00:32:27.094 slat (nsec): min=5944, max=54100, avg=15285.01, stdev=5438.32 00:32:27.094 clat (usec): 
min=9155, max=59484, avg=16015.96, stdev=6002.58 00:32:27.094 lat (usec): min=9174, max=59497, avg=16031.24, stdev=6002.35 00:32:27.094 clat percentiles (usec): 00:32:27.094 | 1.00th=[ 9765], 5.00th=[11338], 10.00th=[13173], 20.00th=[14222], 00:32:27.094 | 30.00th=[14746], 40.00th=[15139], 50.00th=[15401], 60.00th=[15795], 00:32:27.094 | 70.00th=[16057], 80.00th=[16581], 90.00th=[17171], 95.00th=[17695], 00:32:27.094 | 99.00th=[55837], 99.50th=[57410], 99.90th=[58459], 99.95th=[59507], 00:32:27.094 | 99.99th=[59507] 00:32:27.094 bw ( KiB/s): min=18432, max=27904, per=32.90%, avg=24000.00, stdev=2128.72, samples=20 00:32:27.094 iops : min= 144, max= 218, avg=187.50, stdev=16.63, samples=20 00:32:27.094 lat (msec) : 10=1.44%, 20=96.38%, 50=0.16%, 100=2.02% 00:32:27.094 cpu : usr=91.82%, sys=7.69%, ctx=30, majf=0, minf=55 00:32:27.094 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:27.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.094 issued rwts: total=1877,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.094 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:27.094 filename0: (groupid=0, jobs=1): err= 0: pid=139518: Wed Apr 17 06:58:29 2024 00:32:27.094 read: IOPS=187, BW=23.4MiB/s (24.6MB/s)(235MiB/10048msec) 00:32:27.094 slat (nsec): min=5906, max=38235, avg=13929.33, stdev=3939.94 00:32:27.094 clat (usec): min=8984, max=58557, avg=15946.03, stdev=5012.64 00:32:27.094 lat (usec): min=8996, max=58572, avg=15959.96, stdev=5012.69 00:32:27.094 clat percentiles (usec): 00:32:27.094 | 1.00th=[ 9896], 5.00th=[11076], 10.00th=[13042], 20.00th=[14484], 00:32:27.094 | 30.00th=[15008], 40.00th=[15401], 50.00th=[15664], 60.00th=[16057], 00:32:27.094 | 70.00th=[16319], 80.00th=[16712], 90.00th=[17433], 95.00th=[17957], 00:32:27.094 | 99.00th=[55837], 99.50th=[57410], 99.90th=[58459], 99.95th=[58459], 00:32:27.094 | 99.99th=[58459] 00:32:27.094 bw ( KiB/s): min=21248, max=27136, per=32.99%, avg=24066.40, stdev=1529.71, samples=20 00:32:27.094 iops : min= 166, max= 212, avg=188.00, stdev=11.95, samples=20 00:32:27.094 lat (msec) : 10=1.27%, 20=97.24%, 50=0.21%, 100=1.27% 00:32:27.094 cpu : usr=91.45%, sys=8.05%, ctx=23, majf=0, minf=96 00:32:27.094 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:27.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.094 issued rwts: total=1883,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.094 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:27.094 00:32:27.094 Run status group 0 (all jobs): 00:32:27.094 READ: bw=71.2MiB/s (74.7MB/s), 23.3MiB/s-24.5MiB/s (24.5MB/s-25.7MB/s), io=716MiB (751MB), run=10045-10048msec 00:32:27.094 06:58:30 -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:27.094 06:58:30 -- target/dif.sh@43 -- # local sub 00:32:27.094 06:58:30 -- target/dif.sh@45 -- # for sub in "$@" 00:32:27.094 06:58:30 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:27.094 06:58:30 -- target/dif.sh@36 -- # local sub_id=0 00:32:27.094 06:58:30 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:27.094 06:58:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:27.094 06:58:30 -- common/autotest_common.sh@10 -- # set +x 00:32:27.094 06:58:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
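The group summary above can be cross-checked from the per-job counters: the three jobs issued 1967 + 1877 + 1883 = 5727 reads of 128 KiB over roughly 10.05 s, i.e. about 716 MiB at about 71.2 MiB/s, matching the READ line. As a one-liner using only numbers taken from the job outputs above:

awk 'BEGIN { printf "%.1f MiB/s\n", (1967 + 1877 + 1883) * 128 / 1024 / 10.048 }'   # -> 71.2 MiB/s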
00:32:27.094 06:58:30 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:27.094 06:58:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:27.094 06:58:30 -- common/autotest_common.sh@10 -- # set +x 00:32:27.094 06:58:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:27.094 00:32:27.094 real 0m11.213s 00:32:27.094 user 0m28.844s 00:32:27.094 sys 0m2.636s 00:32:27.094 06:58:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:27.094 06:58:30 -- common/autotest_common.sh@10 -- # set +x 00:32:27.094 ************************************ 00:32:27.094 END TEST fio_dif_digest 00:32:27.094 ************************************ 00:32:27.094 06:58:30 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:27.094 06:58:30 -- target/dif.sh@147 -- # nvmftestfini 00:32:27.094 06:58:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:32:27.094 06:58:30 -- nvmf/common.sh@117 -- # sync 00:32:27.094 06:58:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:27.094 06:58:30 -- nvmf/common.sh@120 -- # set +e 00:32:27.094 06:58:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:27.094 06:58:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:27.094 rmmod nvme_tcp 00:32:27.094 rmmod nvme_fabrics 00:32:27.094 rmmod nvme_keyring 00:32:27.094 06:58:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:27.094 06:58:30 -- nvmf/common.sh@124 -- # set -e 00:32:27.094 06:58:30 -- nvmf/common.sh@125 -- # return 0 00:32:27.094 06:58:30 -- nvmf/common.sh@478 -- # '[' -n 133308 ']' 00:32:27.094 06:58:30 -- nvmf/common.sh@479 -- # killprocess 133308 00:32:27.094 06:58:30 -- common/autotest_common.sh@936 -- # '[' -z 133308 ']' 00:32:27.094 06:58:30 -- common/autotest_common.sh@940 -- # kill -0 133308 00:32:27.094 06:58:30 -- common/autotest_common.sh@941 -- # uname 00:32:27.094 06:58:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:27.094 06:58:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 133308 00:32:27.094 06:58:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:27.094 06:58:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:27.094 06:58:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 133308' 00:32:27.094 killing process with pid 133308 00:32:27.094 06:58:30 -- common/autotest_common.sh@955 -- # kill 133308 00:32:27.094 06:58:30 -- common/autotest_common.sh@960 -- # wait 133308 00:32:27.094 06:58:30 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:32:27.094 06:58:30 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:27.094 Waiting for block devices as requested 00:32:27.353 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:27.353 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:27.353 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:27.611 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:27.611 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:27.611 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:27.611 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:27.611 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:27.870 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:27.870 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:27.870 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:28.128 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:28.128 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:28.128 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:28.128 0000:80:04.2 (8086 
0e22): vfio-pci -> ioatdma 00:32:28.386 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:28.386 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:28.386 06:58:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:32:28.386 06:58:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:32:28.386 06:58:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:28.386 06:58:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:28.386 06:58:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:28.386 06:58:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:28.386 06:58:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:30.953 06:58:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:30.953 00:32:30.953 real 1m7.457s 00:32:30.953 user 6m27.688s 00:32:30.953 sys 0m19.726s 00:32:30.953 06:58:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:30.953 06:58:34 -- common/autotest_common.sh@10 -- # set +x 00:32:30.953 ************************************ 00:32:30.953 END TEST nvmf_dif 00:32:30.953 ************************************ 00:32:30.953 06:58:35 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:30.953 06:58:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:30.953 06:58:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:30.953 06:58:35 -- common/autotest_common.sh@10 -- # set +x 00:32:30.953 ************************************ 00:32:30.953 START TEST nvmf_abort_qd_sizes 00:32:30.953 ************************************ 00:32:30.953 06:58:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:30.953 * Looking for test storage... 
00:32:30.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:30.953 06:58:35 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:30.953 06:58:35 -- nvmf/common.sh@7 -- # uname -s 00:32:30.953 06:58:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:30.953 06:58:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:30.953 06:58:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:30.953 06:58:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:30.953 06:58:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:30.953 06:58:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:30.953 06:58:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:30.953 06:58:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:30.953 06:58:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:30.953 06:58:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:30.953 06:58:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:30.953 06:58:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:30.953 06:58:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:30.953 06:58:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:30.953 06:58:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:30.953 06:58:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:30.953 06:58:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:30.953 06:58:35 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:30.953 06:58:35 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:30.953 06:58:35 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:30.953 06:58:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.953 06:58:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.953 06:58:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.953 06:58:35 -- paths/export.sh@5 -- # export PATH 00:32:30.954 06:58:35 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.954 06:58:35 -- nvmf/common.sh@47 -- # : 0 00:32:30.954 06:58:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:30.954 06:58:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:30.954 06:58:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:30.954 06:58:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:30.954 06:58:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:30.954 06:58:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:30.954 06:58:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:30.954 06:58:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:30.954 06:58:35 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:32:30.954 06:58:35 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:32:30.954 06:58:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:30.954 06:58:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:32:30.954 06:58:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:32:30.954 06:58:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:32:30.954 06:58:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:30.954 06:58:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:30.954 06:58:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:30.954 06:58:35 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:32:30.954 06:58:35 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:32:30.954 06:58:35 -- nvmf/common.sh@285 -- # xtrace_disable 00:32:30.954 06:58:35 -- common/autotest_common.sh@10 -- # set +x 00:32:32.853 06:58:37 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:32.853 06:58:37 -- nvmf/common.sh@291 -- # pci_devs=() 00:32:32.853 06:58:37 -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:32.853 06:58:37 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:32.853 06:58:37 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:32.853 06:58:37 -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:32.853 06:58:37 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:32.853 06:58:37 -- nvmf/common.sh@295 -- # net_devs=() 00:32:32.853 06:58:37 -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:32.853 06:58:37 -- nvmf/common.sh@296 -- # e810=() 00:32:32.853 06:58:37 -- nvmf/common.sh@296 -- # local -ga e810 00:32:32.853 06:58:37 -- nvmf/common.sh@297 -- # x722=() 00:32:32.853 06:58:37 -- nvmf/common.sh@297 -- # local -ga x722 00:32:32.853 06:58:37 -- nvmf/common.sh@298 -- # mlx=() 00:32:32.853 06:58:37 -- nvmf/common.sh@298 -- # local -ga mlx 00:32:32.853 06:58:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:32.853 06:58:37 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:32.853 06:58:37 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:32.853 06:58:37 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:32.853 06:58:37 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:32.853 06:58:37 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:32.853 06:58:37 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:32.853 06:58:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:32.853 06:58:37 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:32.853 06:58:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:32.853 06:58:37 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:32.853 06:58:37 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:32.853 06:58:37 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:32.853 06:58:37 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:32.853 06:58:37 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:32.853 06:58:37 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:32.853 06:58:37 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:32.853 06:58:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:32.853 06:58:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:32.853 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:32.854 06:58:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:32.854 06:58:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:32.854 06:58:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:32.854 06:58:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:32.854 06:58:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:32.854 06:58:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:32.854 06:58:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:32.854 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:32.854 06:58:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:32.854 06:58:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:32.854 06:58:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:32.854 06:58:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:32.854 06:58:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:32.854 06:58:37 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:32.854 06:58:37 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:32.854 06:58:37 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:32.854 06:58:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:32.854 06:58:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:32.854 06:58:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:32:32.854 06:58:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:32.854 06:58:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:32.854 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:32.854 06:58:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:32:32.854 06:58:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:32.854 06:58:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:32.854 06:58:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:32:32.854 06:58:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:32.854 06:58:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:32.854 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:32.854 06:58:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:32:32.854 06:58:37 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:32:32.854 06:58:37 -- nvmf/common.sh@403 -- # is_hw=yes 00:32:32.854 06:58:37 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:32:32.854 06:58:37 -- 
nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:32:32.854 06:58:37 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:32:32.854 06:58:37 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:32.854 06:58:37 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:32.854 06:58:37 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:32.854 06:58:37 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:32.854 06:58:37 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:32.854 06:58:37 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:32.854 06:58:37 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:32.854 06:58:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:32.854 06:58:37 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:32.854 06:58:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:32.854 06:58:37 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:32.854 06:58:37 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:32.854 06:58:37 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:32.854 06:58:37 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:32.854 06:58:37 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:32.854 06:58:37 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:32.854 06:58:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:32.854 06:58:37 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:32.854 06:58:37 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:32.854 06:58:37 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:32.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:32.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:32:32.854 00:32:32.854 --- 10.0.0.2 ping statistics --- 00:32:32.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:32.854 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:32:32.854 06:58:37 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:32.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:32.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:32:32.854 00:32:32.854 --- 10.0.0.1 ping statistics --- 00:32:32.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:32.854 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:32:32.854 06:58:37 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:32.854 06:58:37 -- nvmf/common.sh@411 -- # return 0 00:32:32.854 06:58:37 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:32:32.854 06:58:37 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:33.789 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:33.789 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:33.789 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:33.789 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:33.789 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:33.789 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:33.789 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:33.789 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:33.789 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:33.789 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:33.789 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:33.789 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:33.789 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:33.789 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:33.789 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:33.789 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:34.723 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:34.981 06:58:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:34.981 06:58:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:32:34.981 06:58:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:32:34.981 06:58:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:34.981 06:58:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:32:34.981 06:58:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:32:34.981 06:58:39 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:34.981 06:58:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:32:34.981 06:58:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:32:34.981 06:58:39 -- common/autotest_common.sh@10 -- # set +x 00:32:34.981 06:58:39 -- nvmf/common.sh@470 -- # nvmfpid=144434 00:32:34.981 06:58:39 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:34.981 06:58:39 -- nvmf/common.sh@471 -- # waitforlisten 144434 00:32:34.981 06:58:39 -- common/autotest_common.sh@817 -- # '[' -z 144434 ']' 00:32:34.981 06:58:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:34.981 06:58:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:34.981 06:58:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:34.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:34.981 06:58:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:34.981 06:58:39 -- common/autotest_common.sh@10 -- # set +x 00:32:34.981 [2024-04-17 06:58:39.501550] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
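The abort_qd_sizes tests run over the split-port topology that nvmftestinit has just built: the two ports of the same NIC (cvl_0_0 and cvl_0_1) are divided across a network namespace so target and initiator live on one machine but still talk over real TCP. Condensed from the commands traced above, the setup is:

ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side (host)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # host -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> host sanity check

nvmf_tgt is then started inside that namespace (the ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf line above, pid 144434), so every listener the abort tests create is reached at 10.0.0.2.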
00:32:34.981 [2024-04-17 06:58:39.501613] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:34.981 EAL: No free 2048 kB hugepages reported on node 1 00:32:34.981 [2024-04-17 06:58:39.571780] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:35.239 [2024-04-17 06:58:39.663479] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:35.239 [2024-04-17 06:58:39.663533] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:35.239 [2024-04-17 06:58:39.663548] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:35.239 [2024-04-17 06:58:39.663559] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:35.239 [2024-04-17 06:58:39.663569] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:35.239 [2024-04-17 06:58:39.663638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:35.239 [2024-04-17 06:58:39.663661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:35.239 [2024-04-17 06:58:39.663727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:35.239 [2024-04-17 06:58:39.663730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:35.239 06:58:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:35.239 06:58:39 -- common/autotest_common.sh@850 -- # return 0 00:32:35.239 06:58:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:32:35.240 06:58:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:35.240 06:58:39 -- common/autotest_common.sh@10 -- # set +x 00:32:35.240 06:58:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:35.240 06:58:39 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:35.240 06:58:39 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:35.240 06:58:39 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:35.240 06:58:39 -- scripts/common.sh@309 -- # local bdf bdfs 00:32:35.240 06:58:39 -- scripts/common.sh@310 -- # local nvmes 00:32:35.240 06:58:39 -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:32:35.240 06:58:39 -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:35.240 06:58:39 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:32:35.240 06:58:39 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:32:35.240 06:58:39 -- scripts/common.sh@320 -- # uname -s 00:32:35.240 06:58:39 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:32:35.240 06:58:39 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:32:35.240 06:58:39 -- scripts/common.sh@325 -- # (( 1 )) 00:32:35.240 06:58:39 -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:32:35.240 06:58:39 -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:32:35.240 06:58:39 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:32:35.240 06:58:39 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:35.240 06:58:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:35.240 06:58:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:35.240 06:58:39 -- 
common/autotest_common.sh@10 -- # set +x 00:32:35.498 ************************************ 00:32:35.498 START TEST spdk_target_abort 00:32:35.498 ************************************ 00:32:35.498 06:58:39 -- common/autotest_common.sh@1111 -- # spdk_target 00:32:35.498 06:58:39 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:35.498 06:58:39 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:32:35.498 06:58:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:35.498 06:58:39 -- common/autotest_common.sh@10 -- # set +x 00:32:38.777 spdk_targetn1 00:32:38.777 06:58:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:38.777 06:58:42 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:38.777 06:58:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:38.777 06:58:42 -- common/autotest_common.sh@10 -- # set +x 00:32:38.777 [2024-04-17 06:58:42.764935] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:38.777 06:58:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:38.777 06:58:42 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:38.777 06:58:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:38.777 06:58:42 -- common/autotest_common.sh@10 -- # set +x 00:32:38.777 06:58:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:38.777 06:58:42 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:38.777 06:58:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:38.777 06:58:42 -- common/autotest_common.sh@10 -- # set +x 00:32:38.777 06:58:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:38.777 06:58:42 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:38.777 06:58:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:38.777 06:58:42 -- common/autotest_common.sh@10 -- # set +x 00:32:38.778 [2024-04-17 06:58:42.797197] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:38.778 06:58:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:38.778 06:58:42 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:38.778 06:58:42 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:38.778 06:58:42 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:38.778 06:58:42 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:38.778 06:58:42 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:38.778 06:58:42 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:38.778 06:58:42 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:38.778 06:58:42 -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:38.778 06:58:42 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:38.778 06:58:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:38.778 06:58:42 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:38.778 06:58:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:38.778 06:58:42 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:38.778 06:58:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 
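The for-loop traced here is rabort assembling the -r target description field by field; once complete it runs the SPDK abort example once per queue depth in qds=(4 24 64), which is what produces the three Initializing/abort-summary blocks that follow. Spelled out, the runs amount to:

TGT='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -q "$qd" -w rw -M 50 -o 4096 -r "$TGT"    # 4 KiB mixed I/O, 50% reads, aborts issued in flight
done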
00:32:38.778 06:58:42 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:38.778 06:58:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:38.778 06:58:42 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:38.778 06:58:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:38.778 06:58:42 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:38.778 06:58:42 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:38.778 06:58:42 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:38.778 EAL: No free 2048 kB hugepages reported on node 1 00:32:42.056 Initializing NVMe Controllers 00:32:42.056 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:42.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:42.056 Initialization complete. Launching workers. 00:32:42.056 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10110, failed: 0 00:32:42.056 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1711, failed to submit 8399 00:32:42.056 success 802, unsuccess 909, failed 0 00:32:42.056 06:58:45 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:42.056 06:58:45 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:42.056 EAL: No free 2048 kB hugepages reported on node 1 00:32:45.334 Initializing NVMe Controllers 00:32:45.334 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:45.334 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:45.334 Initialization complete. Launching workers. 00:32:45.334 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8630, failed: 0 00:32:45.334 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1253, failed to submit 7377 00:32:45.334 success 299, unsuccess 954, failed 0 00:32:45.334 06:58:49 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:45.334 06:58:49 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:45.334 EAL: No free 2048 kB hugepages reported on node 1 00:32:47.892 Initializing NVMe Controllers 00:32:47.892 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:47.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:47.892 Initialization complete. Launching workers. 
00:32:47.892 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30908, failed: 0 00:32:47.892 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2648, failed to submit 28260 00:32:47.892 success 517, unsuccess 2131, failed 0 00:32:47.892 06:58:52 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:47.892 06:58:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:47.892 06:58:52 -- common/autotest_common.sh@10 -- # set +x 00:32:47.892 06:58:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:47.892 06:58:52 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:47.892 06:58:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:47.892 06:58:52 -- common/autotest_common.sh@10 -- # set +x 00:32:49.262 06:58:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:49.262 06:58:53 -- target/abort_qd_sizes.sh@61 -- # killprocess 144434 00:32:49.262 06:58:53 -- common/autotest_common.sh@936 -- # '[' -z 144434 ']' 00:32:49.262 06:58:53 -- common/autotest_common.sh@940 -- # kill -0 144434 00:32:49.262 06:58:53 -- common/autotest_common.sh@941 -- # uname 00:32:49.262 06:58:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:49.262 06:58:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 144434 00:32:49.262 06:58:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:49.262 06:58:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:49.262 06:58:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 144434' 00:32:49.262 killing process with pid 144434 00:32:49.262 06:58:53 -- common/autotest_common.sh@955 -- # kill 144434 00:32:49.262 06:58:53 -- common/autotest_common.sh@960 -- # wait 144434 00:32:49.520 00:32:49.520 real 0m14.094s 00:32:49.520 user 0m52.658s 00:32:49.520 sys 0m2.971s 00:32:49.520 06:58:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:49.520 06:58:54 -- common/autotest_common.sh@10 -- # set +x 00:32:49.520 ************************************ 00:32:49.520 END TEST spdk_target_abort 00:32:49.520 ************************************ 00:32:49.520 06:58:54 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:49.520 06:58:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:49.520 06:58:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:49.520 06:58:54 -- common/autotest_common.sh@10 -- # set +x 00:32:49.777 ************************************ 00:32:49.777 START TEST kernel_target_abort 00:32:49.777 ************************************ 00:32:49.777 06:58:54 -- common/autotest_common.sh@1111 -- # kernel_target 00:32:49.777 06:58:54 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:49.777 06:58:54 -- nvmf/common.sh@717 -- # local ip 00:32:49.777 06:58:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:49.777 06:58:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:49.777 06:58:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.777 06:58:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.777 06:58:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:49.777 06:58:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.777 06:58:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:49.777 06:58:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:49.777 06:58:54 -- nvmf/common.sh@731 -- # echo 
10.0.0.1 00:32:49.777 06:58:54 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:49.777 06:58:54 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:49.777 06:58:54 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:32:49.777 06:58:54 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:49.777 06:58:54 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:49.777 06:58:54 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:49.777 06:58:54 -- nvmf/common.sh@628 -- # local block nvme 00:32:49.777 06:58:54 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:32:49.777 06:58:54 -- nvmf/common.sh@631 -- # modprobe nvmet 00:32:49.777 06:58:54 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:49.777 06:58:54 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:50.710 Waiting for block devices as requested 00:32:50.710 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:50.968 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:50.968 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:50.968 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:50.968 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:51.227 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:51.227 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:51.227 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:51.227 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:51.485 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:51.485 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:51.485 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:51.485 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:51.742 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:51.742 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:51.742 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:51.742 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:52.000 06:58:56 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:32:52.000 06:58:56 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:52.000 06:58:56 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:32:52.000 06:58:56 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:32:52.000 06:58:56 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:52.000 06:58:56 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:32:52.000 06:58:56 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:32:52.000 06:58:56 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:52.000 06:58:56 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:52.000 No valid GPT data, bailing 00:32:52.000 06:58:56 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:52.000 06:58:56 -- scripts/common.sh@391 -- # pt= 00:32:52.000 06:58:56 -- scripts/common.sh@392 -- # return 1 00:32:52.000 06:58:56 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:32:52.000 06:58:56 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:32:52.000 06:58:56 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:52.000 06:58:56 -- nvmf/common.sh@648 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:52.000 06:58:56 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:52.000 06:58:56 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:52.000 06:58:56 -- nvmf/common.sh@656 -- # echo 1 00:32:52.000 06:58:56 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:32:52.000 06:58:56 -- nvmf/common.sh@658 -- # echo 1 00:32:52.000 06:58:56 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:32:52.000 06:58:56 -- nvmf/common.sh@661 -- # echo tcp 00:32:52.000 06:58:56 -- nvmf/common.sh@662 -- # echo 4420 00:32:52.000 06:58:56 -- nvmf/common.sh@663 -- # echo ipv4 00:32:52.000 06:58:56 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:52.000 06:58:56 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:52.000 00:32:52.000 Discovery Log Number of Records 2, Generation counter 2 00:32:52.000 =====Discovery Log Entry 0====== 00:32:52.000 trtype: tcp 00:32:52.000 adrfam: ipv4 00:32:52.000 subtype: current discovery subsystem 00:32:52.000 treq: not specified, sq flow control disable supported 00:32:52.000 portid: 1 00:32:52.000 trsvcid: 4420 00:32:52.000 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:52.000 traddr: 10.0.0.1 00:32:52.000 eflags: none 00:32:52.000 sectype: none 00:32:52.000 =====Discovery Log Entry 1====== 00:32:52.000 trtype: tcp 00:32:52.000 adrfam: ipv4 00:32:52.000 subtype: nvme subsystem 00:32:52.000 treq: not specified, sq flow control disable supported 00:32:52.001 portid: 1 00:32:52.001 trsvcid: 4420 00:32:52.001 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:52.001 traddr: 10.0.0.1 00:32:52.001 eflags: none 00:32:52.001 sectype: none 00:32:52.001 06:58:56 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:32:52.001 06:58:56 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:52.001 06:58:56 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:52.001 06:58:56 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:52.001 06:58:56 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:52.001 06:58:56 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:52.001 06:58:56 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:52.001 06:58:56 -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:52.001 06:58:56 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:52.001 06:58:56 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:52.001 06:58:56 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:52.001 06:58:56 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:52.001 06:58:56 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:52.001 06:58:56 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:52.001 06:58:56 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:52.001 06:58:56 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:52.001 06:58:56 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:52.001 06:58:56 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr 
trsvcid subnqn 00:32:52.001 06:58:56 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:52.001 06:58:56 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:52.001 06:58:56 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:52.001 EAL: No free 2048 kB hugepages reported on node 1 00:32:55.281 Initializing NVMe Controllers 00:32:55.281 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:55.281 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:55.281 Initialization complete. Launching workers. 00:32:55.281 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31566, failed: 0 00:32:55.281 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31566, failed to submit 0 00:32:55.281 success 0, unsuccess 31566, failed 0 00:32:55.281 06:58:59 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:55.281 06:58:59 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:55.281 EAL: No free 2048 kB hugepages reported on node 1 00:32:58.561 Initializing NVMe Controllers 00:32:58.561 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:58.561 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:58.561 Initialization complete. Launching workers. 00:32:58.561 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 62345, failed: 0 00:32:58.561 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15722, failed to submit 46623 00:32:58.561 success 0, unsuccess 15722, failed 0 00:32:58.561 06:59:02 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:58.561 06:59:02 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:58.561 EAL: No free 2048 kB hugepages reported on node 1 00:33:01.841 Initializing NVMe Controllers 00:33:01.841 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:01.841 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:01.841 Initialization complete. Launching workers. 
00:33:01.841 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 63234, failed: 0 00:33:01.841 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15766, failed to submit 47468 00:33:01.841 success 0, unsuccess 15766, failed 0 00:33:01.841 06:59:05 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:33:01.841 06:59:05 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:01.841 06:59:05 -- nvmf/common.sh@675 -- # echo 0 00:33:01.841 06:59:05 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:01.841 06:59:05 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:01.841 06:59:05 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:01.841 06:59:05 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:01.841 06:59:05 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:33:01.841 06:59:05 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:33:01.841 06:59:05 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:02.408 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:02.408 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:02.408 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:02.408 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:02.408 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:02.409 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:02.409 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:02.409 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:02.409 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:02.409 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:02.409 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:02.409 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:02.409 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:02.409 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:02.409 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:02.409 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:03.344 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:03.603 00:33:03.603 real 0m13.913s 00:33:03.603 user 0m4.936s 00:33:03.603 sys 0m3.279s 00:33:03.603 06:59:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:03.603 06:59:08 -- common/autotest_common.sh@10 -- # set +x 00:33:03.603 ************************************ 00:33:03.603 END TEST kernel_target_abort 00:33:03.603 ************************************ 00:33:03.603 06:59:08 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:03.603 06:59:08 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:33:03.603 06:59:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:33:03.603 06:59:08 -- nvmf/common.sh@117 -- # sync 00:33:03.603 06:59:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:03.603 06:59:08 -- nvmf/common.sh@120 -- # set +e 00:33:03.603 06:59:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:03.603 06:59:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:03.603 rmmod nvme_tcp 00:33:03.603 rmmod nvme_fabrics 00:33:03.603 rmmod nvme_keyring 00:33:03.603 06:59:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:03.603 06:59:08 -- nvmf/common.sh@124 -- # set -e 00:33:03.603 06:59:08 -- nvmf/common.sh@125 -- # return 0 00:33:03.603 06:59:08 -- nvmf/common.sh@478 -- # '[' -n 144434 ']' 
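Editor's note: the trace above walks configure_kernel_target() and clean_kernel_target() from nvmf/common.sh — the Linux nvmet target is assembled entirely through configfs, a namespace is backed by the probed /dev/nvme0n1, the subsystem is linked into a TCP port on 10.0.0.1:4420, and the same entries are unlinked and removed after the abort runs. Below is a minimal standalone sketch of those steps; the NQN, address and device are the values from this run, and the configfs attribute file names are the stock kernel nvmet ones, which is an assumption on my part since xtrace only shows the echoed values, not the redirect targets.

#!/usr/bin/env bash
# Sketch of the configfs flow traced above (configure_kernel_target /
# clean_kernel_target in nvmf/common.sh). Values taken from this run.
set -e

nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
ns=$subsys/namespaces/1
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet          # nvmet_tcp is pulled in when the TCP port is enabled,
                        # which is why teardown has to remove both modules

mkdir "$subsys" "$ns" "$port"
echo "SPDK-$nqn"  > "$subsys/attr_serial"         # the 'echo SPDK-nqn...' above
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$ns/device_path"             # block device probed earlier
echo 1            > "$ns/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"               # expose subsystem on the port

# ... 'nvme discover' and the abort runs at qd 4/24/64 happen here ...

echo 0 > "$ns/enable"                             # clean_kernel_target()
rm -f "$port/subsystems/$nqn"
rmdir "$ns" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet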
00:33:03.603 06:59:08 -- nvmf/common.sh@479 -- # killprocess 144434 00:33:03.603 06:59:08 -- common/autotest_common.sh@936 -- # '[' -z 144434 ']' 00:33:03.603 06:59:08 -- common/autotest_common.sh@940 -- # kill -0 144434 00:33:03.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (144434) - No such process 00:33:03.603 06:59:08 -- common/autotest_common.sh@963 -- # echo 'Process with pid 144434 is not found' 00:33:03.603 Process with pid 144434 is not found 00:33:03.603 06:59:08 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:33:03.603 06:59:08 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:04.539 Waiting for block devices as requested 00:33:04.539 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:04.797 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:04.798 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:04.798 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:05.056 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:05.056 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:05.056 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:05.056 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:05.056 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:05.315 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:05.315 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:05.315 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:05.574 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:05.574 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:05.574 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:05.574 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:05.832 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:05.832 06:59:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:33:05.832 06:59:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:33:05.832 06:59:10 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:05.832 06:59:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:05.832 06:59:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:05.832 06:59:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:05.832 06:59:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:07.732 06:59:12 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:07.991 00:33:07.991 real 0m37.229s 00:33:07.991 user 0m59.694s 00:33:07.991 sys 0m9.397s 00:33:07.991 06:59:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:07.991 06:59:12 -- common/autotest_common.sh@10 -- # set +x 00:33:07.991 ************************************ 00:33:07.991 END TEST nvmf_abort_qd_sizes 00:33:07.991 ************************************ 00:33:07.991 06:59:12 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:07.991 06:59:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:07.991 06:59:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:07.991 06:59:12 -- common/autotest_common.sh@10 -- # set +x 00:33:07.991 ************************************ 00:33:07.991 START TEST keyring_file 00:33:07.991 ************************************ 00:33:07.991 06:59:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:07.991 * Looking for test storage... 
00:33:07.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:07.991 06:59:12 -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:07.991 06:59:12 -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:07.991 06:59:12 -- nvmf/common.sh@7 -- # uname -s 00:33:07.991 06:59:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:07.991 06:59:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:07.992 06:59:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:07.992 06:59:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:07.992 06:59:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:07.992 06:59:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:07.992 06:59:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:07.992 06:59:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:07.992 06:59:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:07.992 06:59:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:07.992 06:59:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:07.992 06:59:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:07.992 06:59:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:07.992 06:59:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:07.992 06:59:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:07.992 06:59:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:07.992 06:59:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:07.992 06:59:12 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:07.992 06:59:12 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:07.992 06:59:12 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:07.992 06:59:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.992 06:59:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.992 06:59:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.992 06:59:12 -- paths/export.sh@5 -- # export PATH 00:33:07.992 06:59:12 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.992 06:59:12 -- nvmf/common.sh@47 -- # : 0 00:33:07.992 06:59:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:07.992 06:59:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:07.992 06:59:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:07.992 06:59:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:07.992 06:59:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:07.992 06:59:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:07.992 06:59:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:07.992 06:59:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:07.992 06:59:12 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:07.992 06:59:12 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:07.992 06:59:12 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:07.992 06:59:12 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:33:07.992 06:59:12 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:33:07.992 06:59:12 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:33:07.992 06:59:12 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:07.992 06:59:12 -- keyring/common.sh@15 -- # local name key digest path 00:33:07.992 06:59:12 -- keyring/common.sh@17 -- # name=key0 00:33:07.992 06:59:12 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:07.992 06:59:12 -- keyring/common.sh@17 -- # digest=0 00:33:07.992 06:59:12 -- keyring/common.sh@18 -- # mktemp 00:33:07.992 06:59:12 -- keyring/common.sh@18 -- # path=/tmp/tmp.jd27M8BXW8 00:33:07.992 06:59:12 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:07.992 06:59:12 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:07.992 06:59:12 -- nvmf/common.sh@691 -- # local prefix key digest 00:33:07.992 06:59:12 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:33:07.992 06:59:12 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:33:07.992 06:59:12 -- nvmf/common.sh@693 -- # digest=0 00:33:07.992 06:59:12 -- nvmf/common.sh@694 -- # python - 00:33:07.992 06:59:12 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.jd27M8BXW8 00:33:07.992 06:59:12 -- keyring/common.sh@23 -- # echo /tmp/tmp.jd27M8BXW8 00:33:07.992 06:59:12 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.jd27M8BXW8 00:33:07.992 06:59:12 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:33:07.992 06:59:12 -- keyring/common.sh@15 -- # local name key digest path 00:33:07.992 06:59:12 -- keyring/common.sh@17 -- # name=key1 00:33:07.992 06:59:12 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:07.992 06:59:12 -- keyring/common.sh@17 -- # digest=0 00:33:07.992 06:59:12 -- keyring/common.sh@18 -- # mktemp 00:33:07.992 06:59:12 -- keyring/common.sh@18 -- # path=/tmp/tmp.RX1ZUHPAaS 00:33:07.992 06:59:12 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:07.992 06:59:12 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:33:07.992 06:59:12 -- nvmf/common.sh@691 -- # local prefix key digest 00:33:07.992 06:59:12 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:33:07.992 06:59:12 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:33:07.992 06:59:12 -- nvmf/common.sh@693 -- # digest=0 00:33:07.992 06:59:12 -- nvmf/common.sh@694 -- # python - 00:33:07.992 06:59:12 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.RX1ZUHPAaS 00:33:07.992 06:59:12 -- keyring/common.sh@23 -- # echo /tmp/tmp.RX1ZUHPAaS 00:33:07.992 06:59:12 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.RX1ZUHPAaS 00:33:07.992 06:59:12 -- keyring/file.sh@30 -- # tgtpid=150211 00:33:07.992 06:59:12 -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:07.992 06:59:12 -- keyring/file.sh@32 -- # waitforlisten 150211 00:33:07.992 06:59:12 -- common/autotest_common.sh@817 -- # '[' -z 150211 ']' 00:33:07.992 06:59:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:07.992 06:59:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:07.992 06:59:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:07.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:07.992 06:59:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:07.992 06:59:12 -- common/autotest_common.sh@10 -- # set +x 00:33:08.251 [2024-04-17 06:59:12.639550] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:33:08.251 [2024-04-17 06:59:12.639648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150211 ] 00:33:08.251 EAL: No free 2048 kB hugepages reported on node 1 00:33:08.251 [2024-04-17 06:59:12.699700] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.251 [2024-04-17 06:59:12.783431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:08.509 06:59:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:08.509 06:59:13 -- common/autotest_common.sh@850 -- # return 0 00:33:08.509 06:59:13 -- keyring/file.sh@33 -- # rpc_cmd 00:33:08.509 06:59:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:08.509 06:59:13 -- common/autotest_common.sh@10 -- # set +x 00:33:08.509 [2024-04-17 06:59:13.014687] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:08.509 null0 00:33:08.509 [2024-04-17 06:59:13.046745] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:08.509 [2024-04-17 06:59:13.047238] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:08.509 [2024-04-17 06:59:13.054763] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:33:08.509 06:59:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:08.509 06:59:13 -- keyring/file.sh@43 -- # bperfpid=150214 00:33:08.509 06:59:13 -- keyring/file.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:33:08.509 06:59:13 -- keyring/file.sh@45 -- # waitforlisten 150214 /var/tmp/bperf.sock 00:33:08.509 06:59:13 -- common/autotest_common.sh@817 -- # '[' -z 
150214 ']' 00:33:08.509 06:59:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:08.509 06:59:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:08.509 06:59:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:08.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:08.509 06:59:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:08.509 06:59:13 -- common/autotest_common.sh@10 -- # set +x 00:33:08.509 [2024-04-17 06:59:13.096261] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 00:33:08.509 [2024-04-17 06:59:13.096342] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150214 ] 00:33:08.767 EAL: No free 2048 kB hugepages reported on node 1 00:33:08.767 [2024-04-17 06:59:13.158229] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.768 [2024-04-17 06:59:13.249274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:08.768 06:59:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:08.768 06:59:13 -- common/autotest_common.sh@850 -- # return 0 00:33:08.768 06:59:13 -- keyring/file.sh@46 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jd27M8BXW8 00:33:08.768 06:59:13 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jd27M8BXW8 00:33:09.025 06:59:13 -- keyring/file.sh@47 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.RX1ZUHPAaS 00:33:09.025 06:59:13 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.RX1ZUHPAaS 00:33:09.283 06:59:13 -- keyring/file.sh@48 -- # get_key key0 00:33:09.283 06:59:13 -- keyring/file.sh@48 -- # jq -r .path 00:33:09.283 06:59:13 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:09.283 06:59:13 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:09.283 06:59:13 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:09.541 06:59:14 -- keyring/file.sh@48 -- # [[ /tmp/tmp.jd27M8BXW8 == \/\t\m\p\/\t\m\p\.\j\d\2\7\M\8\B\X\W\8 ]] 00:33:09.541 06:59:14 -- keyring/file.sh@49 -- # get_key key1 00:33:09.541 06:59:14 -- keyring/file.sh@49 -- # jq -r .path 00:33:09.541 06:59:14 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:09.541 06:59:14 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:09.541 06:59:14 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:09.801 06:59:14 -- keyring/file.sh@49 -- # [[ /tmp/tmp.RX1ZUHPAaS == \/\t\m\p\/\t\m\p\.\R\X\1\Z\U\H\P\A\a\S ]] 00:33:09.801 06:59:14 -- keyring/file.sh@50 -- # get_refcnt key0 00:33:09.801 06:59:14 -- keyring/common.sh@12 -- # get_key key0 00:33:09.801 06:59:14 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:09.801 06:59:14 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:09.801 06:59:14 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:09.801 06:59:14 -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:10.066 06:59:14 -- keyring/file.sh@50 -- # (( 1 == 1 )) 00:33:10.066 06:59:14 -- keyring/file.sh@51 -- # get_refcnt key1 00:33:10.066 06:59:14 -- keyring/common.sh@12 -- # get_key key1 00:33:10.066 06:59:14 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:10.066 06:59:14 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:10.066 06:59:14 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:10.066 06:59:14 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:10.323 06:59:14 -- keyring/file.sh@51 -- # (( 1 == 1 )) 00:33:10.323 06:59:14 -- keyring/file.sh@54 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:10.323 06:59:14 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:10.580 [2024-04-17 06:59:15.102065] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:10.580 nvme0n1 00:33:10.838 06:59:15 -- keyring/file.sh@56 -- # get_refcnt key0 00:33:10.838 06:59:15 -- keyring/common.sh@12 -- # get_key key0 00:33:10.838 06:59:15 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:10.838 06:59:15 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:10.838 06:59:15 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:10.838 06:59:15 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:10.838 06:59:15 -- keyring/file.sh@56 -- # (( 2 == 2 )) 00:33:10.838 06:59:15 -- keyring/file.sh@57 -- # get_refcnt key1 00:33:10.838 06:59:15 -- keyring/common.sh@12 -- # get_key key1 00:33:10.838 06:59:15 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:10.838 06:59:15 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:10.838 06:59:15 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:10.838 06:59:15 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:11.095 06:59:15 -- keyring/file.sh@57 -- # (( 1 == 1 )) 00:33:11.095 06:59:15 -- keyring/file.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:11.353 Running I/O for 1 seconds... 
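Editor's note: condensing the happy-path RPC sequence exercised above — the two 0600 key files minted with mktemp are registered with the bdevperf application as key0/key1, a controller is attached with --psk key0, and keyring_get_keys is used to confirm the reference counts (2 for the key pinned by both the keyring and the live controller, 1 for the unused one). A sketch under the assumption that $rootdir points at the SPDK checkout and $key0path/$key1path at the prepared PSK files:

# Condensed sketch of the keyring_file happy path traced above.
rpc="$rootdir/scripts/rpc.py -s /var/tmp/bperf.sock"

# Register both key files with the bdevperf application.
$rpc keyring_file_add_key key0 "$key0path"
$rpc keyring_file_add_key key1 "$key1path"

# Attach an NVMe/TCP controller that authenticates with key0.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

# key0 is now held by the keyring and the controller, so its refcnt reads 2;
# key1 is only held by the keyring and stays at 1.
$rpc keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'
$rpc keyring_get_keys | jq '.[] | select(.name == "key1") | .refcnt'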
00:33:12.285 00:33:12.285 Latency(us) 00:33:12.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:12.285 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:33:12.285 nvme0n1 : 1.02 4625.35 18.07 0.00 0.00 27363.25 8398.32 38253.61 00:33:12.285 =================================================================================================================== 00:33:12.285 Total : 4625.35 18.07 0.00 0.00 27363.25 8398.32 38253.61 00:33:12.285 0 00:33:12.285 06:59:16 -- keyring/file.sh@61 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:12.285 06:59:16 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:12.544 06:59:17 -- keyring/file.sh@62 -- # get_refcnt key0 00:33:12.544 06:59:17 -- keyring/common.sh@12 -- # get_key key0 00:33:12.544 06:59:17 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:12.544 06:59:17 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:12.544 06:59:17 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:12.544 06:59:17 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:12.803 06:59:17 -- keyring/file.sh@62 -- # (( 1 == 1 )) 00:33:12.803 06:59:17 -- keyring/file.sh@63 -- # get_refcnt key1 00:33:12.803 06:59:17 -- keyring/common.sh@12 -- # get_key key1 00:33:12.803 06:59:17 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:12.803 06:59:17 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:12.803 06:59:17 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:12.803 06:59:17 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:13.061 06:59:17 -- keyring/file.sh@63 -- # (( 1 == 1 )) 00:33:13.061 06:59:17 -- keyring/file.sh@66 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:13.061 06:59:17 -- common/autotest_common.sh@638 -- # local es=0 00:33:13.061 06:59:17 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:13.061 06:59:17 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:33:13.061 06:59:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:13.061 06:59:17 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:33:13.061 06:59:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:13.062 06:59:17 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:13.062 06:59:17 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:13.320 [2024-04-17 06:59:17.785373] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:13.320 [2024-04-17 06:59:17.785672] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c28440 (107): Transport endpoint is not connected 00:33:13.320 [2024-04-17 06:59:17.786662] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c28440 (9): Bad file descriptor 00:33:13.320 [2024-04-17 06:59:17.787660] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:13.320 [2024-04-17 06:59:17.787684] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:13.320 [2024-04-17 06:59:17.787699] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:13.320 request: 00:33:13.320 { 00:33:13.320 "name": "nvme0", 00:33:13.320 "trtype": "tcp", 00:33:13.320 "traddr": "127.0.0.1", 00:33:13.320 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:13.320 "adrfam": "ipv4", 00:33:13.320 "trsvcid": "4420", 00:33:13.320 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:13.320 "psk": "key1", 00:33:13.320 "method": "bdev_nvme_attach_controller", 00:33:13.320 "req_id": 1 00:33:13.320 } 00:33:13.320 Got JSON-RPC error response 00:33:13.320 response: 00:33:13.320 { 00:33:13.320 "code": -32602, 00:33:13.320 "message": "Invalid parameters" 00:33:13.320 } 00:33:13.320 06:59:17 -- common/autotest_common.sh@641 -- # es=1 00:33:13.320 06:59:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:13.320 06:59:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:13.320 06:59:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:13.320 06:59:17 -- keyring/file.sh@68 -- # get_refcnt key0 00:33:13.320 06:59:17 -- keyring/common.sh@12 -- # get_key key0 00:33:13.320 06:59:17 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:13.320 06:59:17 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:13.320 06:59:17 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:13.320 06:59:17 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:13.577 06:59:18 -- keyring/file.sh@68 -- # (( 1 == 1 )) 00:33:13.577 06:59:18 -- keyring/file.sh@69 -- # get_refcnt key1 00:33:13.577 06:59:18 -- keyring/common.sh@12 -- # get_key key1 00:33:13.577 06:59:18 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:13.577 06:59:18 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:13.577 06:59:18 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:13.577 06:59:18 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:13.835 06:59:18 -- keyring/file.sh@69 -- # (( 1 == 1 )) 00:33:13.835 06:59:18 -- keyring/file.sh@72 -- # bperf_cmd keyring_file_remove_key key0 00:33:13.835 06:59:18 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:14.093 06:59:18 -- keyring/file.sh@73 -- # bperf_cmd keyring_file_remove_key key1 00:33:14.093 06:59:18 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:33:14.350 06:59:18 -- keyring/file.sh@74 -- # bperf_cmd keyring_get_keys 00:33:14.350 06:59:18 -- keyring/file.sh@74 -- # jq length 00:33:14.350 06:59:18 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:14.607 06:59:19 
-- keyring/file.sh@74 -- # (( 0 == 0 )) 00:33:14.607 06:59:19 -- keyring/file.sh@77 -- # chmod 0660 /tmp/tmp.jd27M8BXW8 00:33:14.607 06:59:19 -- keyring/file.sh@78 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.jd27M8BXW8 00:33:14.607 06:59:19 -- common/autotest_common.sh@638 -- # local es=0 00:33:14.607 06:59:19 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.jd27M8BXW8 00:33:14.607 06:59:19 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:33:14.607 06:59:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:14.607 06:59:19 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:33:14.607 06:59:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:14.607 06:59:19 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jd27M8BXW8 00:33:14.607 06:59:19 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jd27M8BXW8 00:33:14.865 [2024-04-17 06:59:19.260541] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.jd27M8BXW8': 0100660 00:33:14.865 [2024-04-17 06:59:19.260579] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:14.865 request: 00:33:14.865 { 00:33:14.865 "name": "key0", 00:33:14.865 "path": "/tmp/tmp.jd27M8BXW8", 00:33:14.865 "method": "keyring_file_add_key", 00:33:14.865 "req_id": 1 00:33:14.865 } 00:33:14.865 Got JSON-RPC error response 00:33:14.865 response: 00:33:14.865 { 00:33:14.865 "code": -1, 00:33:14.865 "message": "Operation not permitted" 00:33:14.865 } 00:33:14.865 06:59:19 -- common/autotest_common.sh@641 -- # es=1 00:33:14.865 06:59:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:14.865 06:59:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:14.865 06:59:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:14.865 06:59:19 -- keyring/file.sh@81 -- # chmod 0600 /tmp/tmp.jd27M8BXW8 00:33:14.865 06:59:19 -- keyring/file.sh@82 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jd27M8BXW8 00:33:14.865 06:59:19 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jd27M8BXW8 00:33:15.123 06:59:19 -- keyring/file.sh@83 -- # rm -f /tmp/tmp.jd27M8BXW8 00:33:15.123 06:59:19 -- keyring/file.sh@85 -- # get_refcnt key0 00:33:15.123 06:59:19 -- keyring/common.sh@12 -- # get_key key0 00:33:15.123 06:59:19 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:15.123 06:59:19 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:15.123 06:59:19 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:15.123 06:59:19 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:15.381 06:59:19 -- keyring/file.sh@85 -- # (( 1 == 1 )) 00:33:15.381 06:59:19 -- keyring/file.sh@87 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:15.381 06:59:19 -- common/autotest_common.sh@638 -- # local es=0 00:33:15.381 06:59:19 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:15.381 06:59:19 -- 
common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:33:15.381 06:59:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:15.381 06:59:19 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:33:15.381 06:59:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:15.381 06:59:19 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:15.381 06:59:19 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:15.638 [2024-04-17 06:59:20.006577] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.jd27M8BXW8': No such file or directory 00:33:15.638 [2024-04-17 06:59:20.006617] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:15.638 [2024-04-17 06:59:20.006655] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:15.638 [2024-04-17 06:59:20.006669] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:15.638 [2024-04-17 06:59:20.006682] bdev_nvme.c:6191:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:15.638 request: 00:33:15.638 { 00:33:15.638 "name": "nvme0", 00:33:15.638 "trtype": "tcp", 00:33:15.638 "traddr": "127.0.0.1", 00:33:15.638 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:15.638 "adrfam": "ipv4", 00:33:15.638 "trsvcid": "4420", 00:33:15.638 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:15.638 "psk": "key0", 00:33:15.638 "method": "bdev_nvme_attach_controller", 00:33:15.638 "req_id": 1 00:33:15.638 } 00:33:15.638 Got JSON-RPC error response 00:33:15.638 response: 00:33:15.638 { 00:33:15.638 "code": -19, 00:33:15.638 "message": "No such device" 00:33:15.638 } 00:33:15.638 06:59:20 -- common/autotest_common.sh@641 -- # es=1 00:33:15.638 06:59:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:15.638 06:59:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:15.638 06:59:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:15.638 06:59:20 -- keyring/file.sh@89 -- # bperf_cmd keyring_file_remove_key key0 00:33:15.638 06:59:20 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:15.896 06:59:20 -- keyring/file.sh@92 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:15.896 06:59:20 -- keyring/common.sh@15 -- # local name key digest path 00:33:15.896 06:59:20 -- keyring/common.sh@17 -- # name=key0 00:33:15.896 06:59:20 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:15.896 06:59:20 -- keyring/common.sh@17 -- # digest=0 00:33:15.896 06:59:20 -- keyring/common.sh@18 -- # mktemp 00:33:15.896 06:59:20 -- keyring/common.sh@18 -- # path=/tmp/tmp.N0TNg804Nn 00:33:15.896 06:59:20 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:15.896 06:59:20 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:15.896 06:59:20 -- nvmf/common.sh@691 -- # local prefix key digest 00:33:15.896 06:59:20 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:33:15.896 06:59:20 -- 
nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:33:15.896 06:59:20 -- nvmf/common.sh@693 -- # digest=0 00:33:15.896 06:59:20 -- nvmf/common.sh@694 -- # python - 00:33:15.896 06:59:20 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.N0TNg804Nn 00:33:15.896 06:59:20 -- keyring/common.sh@23 -- # echo /tmp/tmp.N0TNg804Nn 00:33:15.896 06:59:20 -- keyring/file.sh@92 -- # key0path=/tmp/tmp.N0TNg804Nn 00:33:15.896 06:59:20 -- keyring/file.sh@93 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.N0TNg804Nn 00:33:15.896 06:59:20 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.N0TNg804Nn 00:33:16.154 06:59:20 -- keyring/file.sh@94 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:16.154 06:59:20 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:16.414 nvme0n1 00:33:16.414 06:59:20 -- keyring/file.sh@96 -- # get_refcnt key0 00:33:16.414 06:59:20 -- keyring/common.sh@12 -- # get_key key0 00:33:16.414 06:59:20 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:16.414 06:59:20 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:16.414 06:59:20 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:16.414 06:59:20 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:16.681 06:59:21 -- keyring/file.sh@96 -- # (( 2 == 2 )) 00:33:16.681 06:59:21 -- keyring/file.sh@97 -- # bperf_cmd keyring_file_remove_key key0 00:33:16.681 06:59:21 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:16.939 06:59:21 -- keyring/file.sh@98 -- # get_key key0 00:33:16.939 06:59:21 -- keyring/file.sh@98 -- # jq -r .removed 00:33:16.939 06:59:21 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:16.939 06:59:21 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:16.939 06:59:21 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:17.197 06:59:21 -- keyring/file.sh@98 -- # [[ true == \t\r\u\e ]] 00:33:17.197 06:59:21 -- keyring/file.sh@99 -- # get_refcnt key0 00:33:17.197 06:59:21 -- keyring/common.sh@12 -- # get_key key0 00:33:17.197 06:59:21 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:17.197 06:59:21 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:17.197 06:59:21 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:17.197 06:59:21 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:17.455 06:59:21 -- keyring/file.sh@99 -- # (( 1 == 1 )) 00:33:17.455 06:59:21 -- keyring/file.sh@100 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:17.455 06:59:21 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:17.712 06:59:22 -- keyring/file.sh@101 -- # bperf_cmd keyring_get_keys 00:33:17.712 06:59:22 -- keyring/file.sh@101 -- # jq length 00:33:17.712 06:59:22 
-- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:17.970 06:59:22 -- keyring/file.sh@101 -- # (( 0 == 0 )) 00:33:17.970 06:59:22 -- keyring/file.sh@104 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.N0TNg804Nn 00:33:17.970 06:59:22 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.N0TNg804Nn 00:33:18.228 06:59:22 -- keyring/file.sh@105 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.RX1ZUHPAaS 00:33:18.228 06:59:22 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.RX1ZUHPAaS 00:33:18.485 06:59:22 -- keyring/file.sh@106 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:18.485 06:59:22 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:18.743 nvme0n1 00:33:18.743 06:59:23 -- keyring/file.sh@109 -- # bperf_cmd save_config 00:33:18.743 06:59:23 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:33:19.001 06:59:23 -- keyring/file.sh@109 -- # config='{ 00:33:19.001 "subsystems": [ 00:33:19.001 { 00:33:19.001 "subsystem": "keyring", 00:33:19.001 "config": [ 00:33:19.001 { 00:33:19.001 "method": "keyring_file_add_key", 00:33:19.001 "params": { 00:33:19.001 "name": "key0", 00:33:19.001 "path": "/tmp/tmp.N0TNg804Nn" 00:33:19.001 } 00:33:19.001 }, 00:33:19.001 { 00:33:19.001 "method": "keyring_file_add_key", 00:33:19.001 "params": { 00:33:19.001 "name": "key1", 00:33:19.001 "path": "/tmp/tmp.RX1ZUHPAaS" 00:33:19.001 } 00:33:19.001 } 00:33:19.001 ] 00:33:19.001 }, 00:33:19.001 { 00:33:19.001 "subsystem": "iobuf", 00:33:19.001 "config": [ 00:33:19.001 { 00:33:19.001 "method": "iobuf_set_options", 00:33:19.001 "params": { 00:33:19.001 "small_pool_count": 8192, 00:33:19.001 "large_pool_count": 1024, 00:33:19.001 "small_bufsize": 8192, 00:33:19.001 "large_bufsize": 135168 00:33:19.001 } 00:33:19.001 } 00:33:19.001 ] 00:33:19.001 }, 00:33:19.001 { 00:33:19.001 "subsystem": "sock", 00:33:19.001 "config": [ 00:33:19.001 { 00:33:19.001 "method": "sock_impl_set_options", 00:33:19.001 "params": { 00:33:19.001 "impl_name": "posix", 00:33:19.001 "recv_buf_size": 2097152, 00:33:19.001 "send_buf_size": 2097152, 00:33:19.001 "enable_recv_pipe": true, 00:33:19.001 "enable_quickack": false, 00:33:19.001 "enable_placement_id": 0, 00:33:19.001 "enable_zerocopy_send_server": true, 00:33:19.001 "enable_zerocopy_send_client": false, 00:33:19.001 "zerocopy_threshold": 0, 00:33:19.001 "tls_version": 0, 00:33:19.001 "enable_ktls": false 00:33:19.001 } 00:33:19.001 }, 00:33:19.001 { 00:33:19.001 "method": "sock_impl_set_options", 00:33:19.001 "params": { 00:33:19.001 "impl_name": "ssl", 00:33:19.001 "recv_buf_size": 4096, 00:33:19.001 "send_buf_size": 4096, 00:33:19.001 "enable_recv_pipe": true, 00:33:19.001 "enable_quickack": false, 00:33:19.001 "enable_placement_id": 0, 00:33:19.001 "enable_zerocopy_send_server": true, 00:33:19.001 "enable_zerocopy_send_client": false, 00:33:19.001 "zerocopy_threshold": 0, 00:33:19.001 "tls_version": 
0, 00:33:19.001 "enable_ktls": false 00:33:19.001 } 00:33:19.001 } 00:33:19.001 ] 00:33:19.001 }, 00:33:19.001 { 00:33:19.001 "subsystem": "vmd", 00:33:19.001 "config": [] 00:33:19.001 }, 00:33:19.001 { 00:33:19.001 "subsystem": "accel", 00:33:19.001 "config": [ 00:33:19.001 { 00:33:19.001 "method": "accel_set_options", 00:33:19.001 "params": { 00:33:19.001 "small_cache_size": 128, 00:33:19.001 "large_cache_size": 16, 00:33:19.001 "task_count": 2048, 00:33:19.001 "sequence_count": 2048, 00:33:19.001 "buf_count": 2048 00:33:19.001 } 00:33:19.001 } 00:33:19.001 ] 00:33:19.001 }, 00:33:19.001 { 00:33:19.001 "subsystem": "bdev", 00:33:19.001 "config": [ 00:33:19.001 { 00:33:19.001 "method": "bdev_set_options", 00:33:19.001 "params": { 00:33:19.001 "bdev_io_pool_size": 65535, 00:33:19.001 "bdev_io_cache_size": 256, 00:33:19.001 "bdev_auto_examine": true, 00:33:19.001 "iobuf_small_cache_size": 128, 00:33:19.001 "iobuf_large_cache_size": 16 00:33:19.001 } 00:33:19.002 }, 00:33:19.002 { 00:33:19.002 "method": "bdev_raid_set_options", 00:33:19.002 "params": { 00:33:19.002 "process_window_size_kb": 1024 00:33:19.002 } 00:33:19.002 }, 00:33:19.002 { 00:33:19.002 "method": "bdev_iscsi_set_options", 00:33:19.002 "params": { 00:33:19.002 "timeout_sec": 30 00:33:19.002 } 00:33:19.002 }, 00:33:19.002 { 00:33:19.002 "method": "bdev_nvme_set_options", 00:33:19.002 "params": { 00:33:19.002 "action_on_timeout": "none", 00:33:19.002 "timeout_us": 0, 00:33:19.002 "timeout_admin_us": 0, 00:33:19.002 "keep_alive_timeout_ms": 10000, 00:33:19.002 "arbitration_burst": 0, 00:33:19.002 "low_priority_weight": 0, 00:33:19.002 "medium_priority_weight": 0, 00:33:19.002 "high_priority_weight": 0, 00:33:19.002 "nvme_adminq_poll_period_us": 10000, 00:33:19.002 "nvme_ioq_poll_period_us": 0, 00:33:19.002 "io_queue_requests": 512, 00:33:19.002 "delay_cmd_submit": true, 00:33:19.002 "transport_retry_count": 4, 00:33:19.002 "bdev_retry_count": 3, 00:33:19.002 "transport_ack_timeout": 0, 00:33:19.002 "ctrlr_loss_timeout_sec": 0, 00:33:19.002 "reconnect_delay_sec": 0, 00:33:19.002 "fast_io_fail_timeout_sec": 0, 00:33:19.002 "disable_auto_failback": false, 00:33:19.002 "generate_uuids": false, 00:33:19.002 "transport_tos": 0, 00:33:19.002 "nvme_error_stat": false, 00:33:19.002 "rdma_srq_size": 0, 00:33:19.002 "io_path_stat": false, 00:33:19.002 "allow_accel_sequence": false, 00:33:19.002 "rdma_max_cq_size": 0, 00:33:19.002 "rdma_cm_event_timeout_ms": 0, 00:33:19.002 "dhchap_digests": [ 00:33:19.002 "sha256", 00:33:19.002 "sha384", 00:33:19.002 "sha512" 00:33:19.002 ], 00:33:19.002 "dhchap_dhgroups": [ 00:33:19.002 "null", 00:33:19.002 "ffdhe2048", 00:33:19.002 "ffdhe3072", 00:33:19.002 "ffdhe4096", 00:33:19.002 "ffdhe6144", 00:33:19.002 "ffdhe8192" 00:33:19.002 ] 00:33:19.002 } 00:33:19.002 }, 00:33:19.002 { 00:33:19.002 "method": "bdev_nvme_attach_controller", 00:33:19.002 "params": { 00:33:19.002 "name": "nvme0", 00:33:19.002 "trtype": "TCP", 00:33:19.002 "adrfam": "IPv4", 00:33:19.002 "traddr": "127.0.0.1", 00:33:19.002 "trsvcid": "4420", 00:33:19.002 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:19.002 "prchk_reftag": false, 00:33:19.002 "prchk_guard": false, 00:33:19.002 "ctrlr_loss_timeout_sec": 0, 00:33:19.002 "reconnect_delay_sec": 0, 00:33:19.002 "fast_io_fail_timeout_sec": 0, 00:33:19.002 "psk": "key0", 00:33:19.002 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:19.002 "hdgst": false, 00:33:19.002 "ddgst": false 00:33:19.002 } 00:33:19.002 }, 00:33:19.002 { 00:33:19.002 "method": "bdev_nvme_set_hotplug", 00:33:19.002 
"params": { 00:33:19.002 "period_us": 100000, 00:33:19.002 "enable": false 00:33:19.002 } 00:33:19.002 }, 00:33:19.002 { 00:33:19.002 "method": "bdev_wait_for_examine" 00:33:19.002 } 00:33:19.002 ] 00:33:19.002 }, 00:33:19.002 { 00:33:19.002 "subsystem": "nbd", 00:33:19.002 "config": [] 00:33:19.002 } 00:33:19.002 ] 00:33:19.002 }' 00:33:19.002 06:59:23 -- keyring/file.sh@111 -- # killprocess 150214 00:33:19.002 06:59:23 -- common/autotest_common.sh@936 -- # '[' -z 150214 ']' 00:33:19.002 06:59:23 -- common/autotest_common.sh@940 -- # kill -0 150214 00:33:19.002 06:59:23 -- common/autotest_common.sh@941 -- # uname 00:33:19.002 06:59:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:19.002 06:59:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 150214 00:33:19.002 06:59:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:33:19.002 06:59:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:33:19.002 06:59:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 150214' 00:33:19.002 killing process with pid 150214 00:33:19.002 06:59:23 -- common/autotest_common.sh@955 -- # kill 150214 00:33:19.002 Received shutdown signal, test time was about 1.000000 seconds 00:33:19.002 00:33:19.002 Latency(us) 00:33:19.002 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:19.002 =================================================================================================================== 00:33:19.002 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:19.002 06:59:23 -- common/autotest_common.sh@960 -- # wait 150214 00:33:19.262 06:59:23 -- keyring/file.sh@114 -- # bperfpid=151557 00:33:19.262 06:59:23 -- keyring/file.sh@116 -- # waitforlisten 151557 /var/tmp/bperf.sock 00:33:19.262 06:59:23 -- common/autotest_common.sh@817 -- # '[' -z 151557 ']' 00:33:19.262 06:59:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:19.262 06:59:23 -- keyring/file.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:33:19.262 06:59:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:19.262 06:59:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:19.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:33:19.262 06:59:23 -- keyring/file.sh@112 -- # echo '{ 00:33:19.262 "subsystems": [ 00:33:19.262 { 00:33:19.262 "subsystem": "keyring", 00:33:19.262 "config": [ 00:33:19.262 { 00:33:19.262 "method": "keyring_file_add_key", 00:33:19.262 "params": { 00:33:19.262 "name": "key0", 00:33:19.262 "path": "/tmp/tmp.N0TNg804Nn" 00:33:19.262 } 00:33:19.262 }, 00:33:19.262 { 00:33:19.262 "method": "keyring_file_add_key", 00:33:19.262 "params": { 00:33:19.262 "name": "key1", 00:33:19.262 "path": "/tmp/tmp.RX1ZUHPAaS" 00:33:19.262 } 00:33:19.262 } 00:33:19.262 ] 00:33:19.262 }, 00:33:19.262 { 00:33:19.262 "subsystem": "iobuf", 00:33:19.262 "config": [ 00:33:19.262 { 00:33:19.262 "method": "iobuf_set_options", 00:33:19.262 "params": { 00:33:19.262 "small_pool_count": 8192, 00:33:19.262 "large_pool_count": 1024, 00:33:19.262 "small_bufsize": 8192, 00:33:19.262 "large_bufsize": 135168 00:33:19.262 } 00:33:19.262 } 00:33:19.262 ] 00:33:19.262 }, 00:33:19.262 { 00:33:19.262 "subsystem": "sock", 00:33:19.262 "config": [ 00:33:19.262 { 00:33:19.262 "method": "sock_impl_set_options", 00:33:19.262 "params": { 00:33:19.262 "impl_name": "posix", 00:33:19.262 "recv_buf_size": 2097152, 00:33:19.262 "send_buf_size": 2097152, 00:33:19.262 "enable_recv_pipe": true, 00:33:19.262 "enable_quickack": false, 00:33:19.262 "enable_placement_id": 0, 00:33:19.262 "enable_zerocopy_send_server": true, 00:33:19.262 "enable_zerocopy_send_client": false, 00:33:19.262 "zerocopy_threshold": 0, 00:33:19.262 "tls_version": 0, 00:33:19.262 "enable_ktls": false 00:33:19.262 } 00:33:19.262 }, 00:33:19.262 { 00:33:19.262 "method": "sock_impl_set_options", 00:33:19.262 "params": { 00:33:19.262 "impl_name": "ssl", 00:33:19.262 "recv_buf_size": 4096, 00:33:19.262 "send_buf_size": 4096, 00:33:19.262 "enable_recv_pipe": true, 00:33:19.262 "enable_quickack": false, 00:33:19.262 "enable_placement_id": 0, 00:33:19.262 "enable_zerocopy_send_server": true, 00:33:19.262 "enable_zerocopy_send_client": false, 00:33:19.262 "zerocopy_threshold": 0, 00:33:19.262 "tls_version": 0, 00:33:19.262 "enable_ktls": false 00:33:19.262 } 00:33:19.262 } 00:33:19.262 ] 00:33:19.262 }, 00:33:19.262 { 00:33:19.262 "subsystem": "vmd", 00:33:19.262 "config": [] 00:33:19.262 }, 00:33:19.262 { 00:33:19.262 "subsystem": "accel", 00:33:19.262 "config": [ 00:33:19.262 { 00:33:19.262 "method": "accel_set_options", 00:33:19.262 "params": { 00:33:19.262 "small_cache_size": 128, 00:33:19.262 "large_cache_size": 16, 00:33:19.262 "task_count": 2048, 00:33:19.262 "sequence_count": 2048, 00:33:19.262 "buf_count": 2048 00:33:19.262 } 00:33:19.262 } 00:33:19.262 ] 00:33:19.262 }, 00:33:19.262 { 00:33:19.262 "subsystem": "bdev", 00:33:19.262 "config": [ 00:33:19.262 { 00:33:19.262 "method": "bdev_set_options", 00:33:19.262 "params": { 00:33:19.262 "bdev_io_pool_size": 65535, 00:33:19.262 "bdev_io_cache_size": 256, 00:33:19.262 "bdev_auto_examine": true, 00:33:19.262 "iobuf_small_cache_size": 128, 00:33:19.262 "iobuf_large_cache_size": 16 00:33:19.262 } 00:33:19.262 }, 00:33:19.262 { 00:33:19.262 "method": "bdev_raid_set_options", 00:33:19.262 "params": { 00:33:19.262 "process_window_size_kb": 1024 00:33:19.262 } 00:33:19.262 }, 00:33:19.262 { 00:33:19.262 "method": "bdev_iscsi_set_options", 00:33:19.262 "params": { 00:33:19.262 "timeout_sec": 30 00:33:19.262 } 00:33:19.262 }, 00:33:19.262 { 00:33:19.262 "method": "bdev_nvme_set_options", 00:33:19.262 "params": { 00:33:19.262 "action_on_timeout": "none", 00:33:19.262 "timeout_us": 0, 00:33:19.262 "timeout_admin_us": 0, 00:33:19.262 
"keep_alive_timeout_ms": 10000, 00:33:19.262 "arbitration_burst": 0, 00:33:19.262 "low_priority_weight": 0, 00:33:19.262 "medium_priority_weight": 0, 00:33:19.262 "high_priority_weight": 0, 00:33:19.262 "nvme_adminq_poll_period_us": 10000, 00:33:19.262 "nvme_ioq_poll_period_us": 0, 00:33:19.262 "io_queue_requests": 512, 00:33:19.262 "delay_cmd_submit": true, 00:33:19.262 "transport_retry_count": 4, 00:33:19.262 "bdev_retry_count": 3, 00:33:19.262 "transport_ack_timeout": 0, 00:33:19.262 "ctrlr_loss_timeout_sec": 0, 00:33:19.262 "reconnect_delay_sec": 0, 00:33:19.262 "fast_io_fail_timeout_sec": 0, 00:33:19.262 "disable_auto_failback": false, 00:33:19.262 "generate_uuids": false, 00:33:19.262 "transport_tos": 0, 00:33:19.262 "nvme_error_stat": false, 00:33:19.262 "rdma_srq_size": 0, 00:33:19.262 "io_path_stat": false, 00:33:19.262 "allow_accel_sequence": false, 00:33:19.262 "rdma_max_cq_size": 0, 00:33:19.262 "rdma_cm_event_timeout_ms": 0, 00:33:19.262 "dhchap_digests": [ 00:33:19.262 "sha256", 00:33:19.262 "sha384", 00:33:19.262 "sha512" 00:33:19.262 ], 00:33:19.262 "dhchap_dhgroups": [ 00:33:19.262 "null", 00:33:19.262 "ffdhe2048", 00:33:19.262 "ffdhe3072", 00:33:19.262 "ffdhe4096", 00:33:19.262 "ffdhe6144", 00:33:19.262 "ffdhe8192" 00:33:19.262 ] 00:33:19.262 } 00:33:19.262 }, 00:33:19.262 { 00:33:19.262 "method": "bdev_nvme_attach_controller", 00:33:19.262 "params": { 00:33:19.262 "name": "nvme0", 00:33:19.262 "trtype": "TCP", 00:33:19.262 "adrfam": "IPv4", 00:33:19.262 "traddr": "127.0.0.1", 00:33:19.262 "trsvcid": "4420", 00:33:19.262 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:19.262 "prchk_reftag": false, 00:33:19.262 "prchk_guard": false, 00:33:19.262 "ctrlr_loss_timeout_sec": 0, 00:33:19.262 "reconnect_delay_sec": 0, 00:33:19.262 "fast_io_fail_timeout_sec": 0, 00:33:19.262 "psk": "key0", 00:33:19.262 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:19.262 "hdgst": false, 00:33:19.262 "ddgst": false 00:33:19.262 } 00:33:19.262 }, 00:33:19.262 { 00:33:19.262 "method": "bdev_nvme_set_hotplug", 00:33:19.262 "params": { 00:33:19.262 "period_us": 100000, 00:33:19.262 "enable": false 00:33:19.262 } 00:33:19.262 }, 00:33:19.262 { 00:33:19.262 "method": "bdev_wait_for_examine" 00:33:19.262 } 00:33:19.262 ] 00:33:19.262 }, 00:33:19.262 { 00:33:19.262 "subsystem": "nbd", 00:33:19.262 "config": [] 00:33:19.262 } 00:33:19.262 ] 00:33:19.262 }' 00:33:19.262 06:59:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:19.262 06:59:23 -- common/autotest_common.sh@10 -- # set +x 00:33:19.262 [2024-04-17 06:59:23.745975] Starting SPDK v24.05-pre git sha1 9c9f7ddbb / DPDK 23.11.0 initialization... 
00:33:19.263 [2024-04-17 06:59:23.746071] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151557 ] 00:33:19.263 EAL: No free 2048 kB hugepages reported on node 1 00:33:19.263 [2024-04-17 06:59:23.808333] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:19.521 [2024-04-17 06:59:23.898689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:19.521 [2024-04-17 06:59:24.080743] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:20.087 06:59:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:20.087 06:59:24 -- common/autotest_common.sh@850 -- # return 0 00:33:20.087 06:59:24 -- keyring/file.sh@117 -- # bperf_cmd keyring_get_keys 00:33:20.087 06:59:24 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:20.087 06:59:24 -- keyring/file.sh@117 -- # jq length 00:33:20.364 06:59:24 -- keyring/file.sh@117 -- # (( 2 == 2 )) 00:33:20.364 06:59:24 -- keyring/file.sh@118 -- # get_refcnt key0 00:33:20.364 06:59:24 -- keyring/common.sh@12 -- # get_key key0 00:33:20.364 06:59:24 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:20.364 06:59:24 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:20.364 06:59:24 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:20.364 06:59:24 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:20.640 06:59:25 -- keyring/file.sh@118 -- # (( 2 == 2 )) 00:33:20.640 06:59:25 -- keyring/file.sh@119 -- # get_refcnt key1 00:33:20.640 06:59:25 -- keyring/common.sh@12 -- # get_key key1 00:33:20.640 06:59:25 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:20.640 06:59:25 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:20.640 06:59:25 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:20.640 06:59:25 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:20.905 06:59:25 -- keyring/file.sh@119 -- # (( 1 == 1 )) 00:33:20.905 06:59:25 -- keyring/file.sh@120 -- # bperf_cmd bdev_nvme_get_controllers 00:33:20.905 06:59:25 -- keyring/file.sh@120 -- # jq -r '.[].name' 00:33:20.905 06:59:25 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:21.162 06:59:25 -- keyring/file.sh@120 -- # [[ nvme0 == nvme0 ]] 00:33:21.162 06:59:25 -- keyring/file.sh@1 -- # cleanup 00:33:21.162 06:59:25 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.N0TNg804Nn /tmp/tmp.RX1ZUHPAaS 00:33:21.162 06:59:25 -- keyring/file.sh@20 -- # killprocess 151557 00:33:21.162 06:59:25 -- common/autotest_common.sh@936 -- # '[' -z 151557 ']' 00:33:21.162 06:59:25 -- common/autotest_common.sh@940 -- # kill -0 151557 00:33:21.162 06:59:25 -- common/autotest_common.sh@941 -- # uname 00:33:21.162 06:59:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:21.162 06:59:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 151557 00:33:21.162 06:59:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:33:21.162 06:59:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:33:21.162 06:59:25 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 151557' 00:33:21.162 killing process with pid 151557 00:33:21.162 06:59:25 -- common/autotest_common.sh@955 -- # kill 151557 00:33:21.162 Received shutdown signal, test time was about 1.000000 seconds 00:33:21.162 00:33:21.162 Latency(us) 00:33:21.162 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:21.162 =================================================================================================================== 00:33:21.162 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:21.162 06:59:25 -- common/autotest_common.sh@960 -- # wait 151557 00:33:21.420 06:59:25 -- keyring/file.sh@21 -- # killprocess 150211 00:33:21.420 06:59:25 -- common/autotest_common.sh@936 -- # '[' -z 150211 ']' 00:33:21.420 06:59:25 -- common/autotest_common.sh@940 -- # kill -0 150211 00:33:21.420 06:59:25 -- common/autotest_common.sh@941 -- # uname 00:33:21.420 06:59:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:21.420 06:59:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 150211 00:33:21.420 06:59:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:21.420 06:59:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:21.420 06:59:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 150211' 00:33:21.420 killing process with pid 150211 00:33:21.420 06:59:25 -- common/autotest_common.sh@955 -- # kill 150211 00:33:21.420 [2024-04-17 06:59:25.939556] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:33:21.420 06:59:25 -- common/autotest_common.sh@960 -- # wait 150211 00:33:21.986 00:33:21.986 real 0m13.890s 00:33:21.986 user 0m34.312s 00:33:21.986 sys 0m3.297s 00:33:21.986 06:59:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:21.986 06:59:26 -- common/autotest_common.sh@10 -- # set +x 00:33:21.986 ************************************ 00:33:21.986 END TEST keyring_file 00:33:21.986 ************************************ 00:33:21.986 06:59:26 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:33:21.986 06:59:26 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:33:21.986 06:59:26 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:33:21.986 06:59:26 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:33:21.986 06:59:26 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:33:21.986 06:59:26 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:33:21.987 06:59:26 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:33:21.987 06:59:26 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:33:21.987 06:59:26 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:33:21.987 06:59:26 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:33:21.987 06:59:26 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:33:21.987 06:59:26 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:33:21.987 06:59:26 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:33:21.987 06:59:26 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:33:21.987 06:59:26 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:33:21.987 06:59:26 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:33:21.987 06:59:26 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:33:21.987 06:59:26 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:33:21.987 06:59:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:33:21.987 06:59:26 -- common/autotest_common.sh@10 -- # set +x 00:33:21.987 06:59:26 -- spdk/autotest.sh@381 -- # autotest_cleanup 
00:33:21.987 06:59:26 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:33:21.987 06:59:26 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:33:21.987 06:59:26 -- common/autotest_common.sh@10 -- # set +x 00:33:23.889 INFO: APP EXITING 00:33:23.889 INFO: killing all VMs 00:33:23.889 INFO: killing vhost app 00:33:23.889 INFO: EXIT DONE 00:33:24.823 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:33:24.823 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:33:24.823 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:33:24.823 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:33:24.823 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:33:24.823 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:33:24.823 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:33:24.823 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:33:24.823 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:33:24.823 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:33:24.823 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:33:24.823 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:33:24.823 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:33:24.823 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:33:24.823 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:33:24.823 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:33:25.081 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:33:26.466 Cleaning 00:33:26.466 Removing: /var/run/dpdk/spdk0/config 00:33:26.466 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:26.466 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:26.466 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:26.466 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:26.466 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:26.466 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:26.466 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:26.466 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:26.466 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:26.466 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:26.466 Removing: /var/run/dpdk/spdk1/config 00:33:26.466 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:26.466 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:26.466 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:26.467 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:26.467 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:26.467 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:26.467 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:26.467 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:26.467 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:26.467 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:26.467 Removing: /var/run/dpdk/spdk1/mp_socket 00:33:26.467 Removing: /var/run/dpdk/spdk2/config 00:33:26.467 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:26.467 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:26.467 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:26.467 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:26.467 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:26.467 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:26.467 Removing: 
/var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:26.467 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:26.467 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:26.467 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:26.467 Removing: /var/run/dpdk/spdk3/config 00:33:26.467 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:26.467 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:26.467 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:26.467 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:26.467 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:26.467 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:26.467 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:26.467 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:26.467 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:26.467 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:26.467 Removing: /var/run/dpdk/spdk4/config 00:33:26.467 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:26.467 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:26.467 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:26.467 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:26.467 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:26.467 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:26.467 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:26.467 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:26.467 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:26.467 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:26.467 Removing: /dev/shm/bdev_svc_trace.1 00:33:26.467 Removing: /dev/shm/nvmf_trace.0 00:33:26.467 Removing: /dev/shm/spdk_tgt_trace.pid4058738 00:33:26.467 Removing: /var/run/dpdk/spdk0 00:33:26.467 Removing: /var/run/dpdk/spdk1 00:33:26.467 Removing: /var/run/dpdk/spdk2 00:33:26.467 Removing: /var/run/dpdk/spdk3 00:33:26.467 Removing: /var/run/dpdk/spdk4 00:33:26.467 Removing: /var/run/dpdk/spdk_pid101747 00:33:26.467 Removing: /var/run/dpdk/spdk_pid101755 00:33:26.467 Removing: /var/run/dpdk/spdk_pid11131 00:33:26.467 Removing: /var/run/dpdk/spdk_pid113536 00:33:26.467 Removing: /var/run/dpdk/spdk_pid113948 00:33:26.467 Removing: /var/run/dpdk/spdk_pid114473 00:33:26.467 Removing: /var/run/dpdk/spdk_pid114878 00:33:26.467 Removing: /var/run/dpdk/spdk_pid115468 00:33:26.467 Removing: /var/run/dpdk/spdk_pid115868 00:33:26.467 Removing: /var/run/dpdk/spdk_pid116279 00:33:26.467 Removing: /var/run/dpdk/spdk_pid116683 00:33:26.467 Removing: /var/run/dpdk/spdk_pid119186 00:33:26.467 Removing: /var/run/dpdk/spdk_pid119331 00:33:26.467 Removing: /var/run/dpdk/spdk_pid123130 00:33:26.467 Removing: /var/run/dpdk/spdk_pid123303 00:33:26.467 Removing: /var/run/dpdk/spdk_pid12442 00:33:26.467 Removing: /var/run/dpdk/spdk_pid12476 00:33:26.467 Removing: /var/run/dpdk/spdk_pid124914 00:33:26.467 Removing: /var/run/dpdk/spdk_pid12606 00:33:26.467 Removing: /var/run/dpdk/spdk_pid12734 00:33:26.467 Removing: /var/run/dpdk/spdk_pid130568 00:33:26.467 Removing: /var/run/dpdk/spdk_pid130578 00:33:26.467 Removing: /var/run/dpdk/spdk_pid13111 00:33:26.467 Removing: /var/run/dpdk/spdk_pid133490 00:33:26.467 Removing: /var/run/dpdk/spdk_pid134772 00:33:26.467 Removing: /var/run/dpdk/spdk_pid136301 00:33:26.467 Removing: /var/run/dpdk/spdk_pid137042 00:33:26.467 Removing: /var/run/dpdk/spdk_pid138573 00:33:26.467 Removing: /var/run/dpdk/spdk_pid139455 00:33:26.467 Removing: 
/var/run/dpdk/spdk_pid14367 00:33:26.467 Removing: /var/run/dpdk/spdk_pid144793 00:33:26.467 Removing: /var/run/dpdk/spdk_pid145137 00:33:26.467 Removing: /var/run/dpdk/spdk_pid145528 00:33:26.467 Removing: /var/run/dpdk/spdk_pid147090 00:33:26.467 Removing: /var/run/dpdk/spdk_pid147368 00:33:26.467 Removing: /var/run/dpdk/spdk_pid147764 00:33:26.467 Removing: /var/run/dpdk/spdk_pid150211 00:33:26.467 Removing: /var/run/dpdk/spdk_pid150214 00:33:26.467 Removing: /var/run/dpdk/spdk_pid15087 00:33:26.467 Removing: /var/run/dpdk/spdk_pid151557 00:33:26.467 Removing: /var/run/dpdk/spdk_pid15410 00:33:26.467 Removing: /var/run/dpdk/spdk_pid17021 00:33:26.467 Removing: /var/run/dpdk/spdk_pid17442 00:33:26.467 Removing: /var/run/dpdk/spdk_pid17888 00:33:26.467 Removing: /var/run/dpdk/spdk_pid20278 00:33:26.467 Removing: /var/run/dpdk/spdk_pid23658 00:33:26.467 Removing: /var/run/dpdk/spdk_pid27083 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4057030 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4057781 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4058738 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4059227 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4059919 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4060055 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4060792 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4060802 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4061060 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4062254 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4063180 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4063486 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4063681 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4063896 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4064151 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4064385 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4064551 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4064737 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4065335 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4067811 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4068295 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4068756 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4068782 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4069220 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4069230 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4069668 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4069672 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4069970 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4069978 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4070147 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4070228 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4070667 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4070827 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4071027 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4071227 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4071368 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4071572 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4071736 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4071901 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4072177 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4072348 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4072507 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4072790 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4072957 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4073122 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4073396 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4073570 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4073729 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4074011 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4074175 
00:33:26.467 Removing: /var/run/dpdk/spdk_pid4074341 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4074613 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4074790 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4074953 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4075237 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4075404 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4075607 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4075765 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4075989 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4078184 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4131540 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4134161 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4139893 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4143071 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4145550 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4145948 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4153209 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4153211 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4153822 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4154404 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4155062 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4155464 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4155466 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4155708 00:33:26.467 Removing: /var/run/dpdk/spdk_pid4155739 00:33:26.726 Removing: /var/run/dpdk/spdk_pid4155747 00:33:26.726 Removing: /var/run/dpdk/spdk_pid4156404 00:33:26.726 Removing: /var/run/dpdk/spdk_pid4157052 00:33:26.726 Removing: /var/run/dpdk/spdk_pid4157646 00:33:26.726 Removing: /var/run/dpdk/spdk_pid4158171 00:33:26.726 Removing: /var/run/dpdk/spdk_pid4158228 00:33:26.726 Removing: /var/run/dpdk/spdk_pid4158377 00:33:26.726 Removing: /var/run/dpdk/spdk_pid4159765 00:33:26.726 Removing: /var/run/dpdk/spdk_pid4160494 00:33:26.726 Removing: /var/run/dpdk/spdk_pid4165868 00:33:26.726 Removing: /var/run/dpdk/spdk_pid4166143 00:33:26.726 Removing: /var/run/dpdk/spdk_pid4168663 00:33:26.726 Removing: /var/run/dpdk/spdk_pid4172373 00:33:26.726 Removing: /var/run/dpdk/spdk_pid4174502 00:33:26.726 Removing: /var/run/dpdk/spdk_pid4180811 00:33:26.726 Removing: /var/run/dpdk/spdk_pid4186020 00:33:26.726 Removing: /var/run/dpdk/spdk_pid4187205 00:33:26.726 Removing: /var/run/dpdk/spdk_pid4187871 00:33:26.726 Removing: /var/run/dpdk/spdk_pid4931 00:33:26.726 Removing: /var/run/dpdk/spdk_pid50465 00:33:26.726 Removing: /var/run/dpdk/spdk_pid53101 00:33:26.726 Removing: /var/run/dpdk/spdk_pid56792 00:33:26.726 Removing: /var/run/dpdk/spdk_pid57828 00:33:26.726 Removing: /var/run/dpdk/spdk_pid59038 00:33:26.726 Removing: /var/run/dpdk/spdk_pid62093 00:33:26.726 Removing: /var/run/dpdk/spdk_pid64347 00:33:26.726 Removing: /var/run/dpdk/spdk_pid68576 00:33:26.726 Removing: /var/run/dpdk/spdk_pid68581 00:33:26.726 Removing: /var/run/dpdk/spdk_pid71394 00:33:26.726 Removing: /var/run/dpdk/spdk_pid7158 00:33:26.726 Removing: /var/run/dpdk/spdk_pid71623 00:33:26.726 Removing: /var/run/dpdk/spdk_pid71756 00:33:26.726 Removing: /var/run/dpdk/spdk_pid72019 00:33:26.726 Removing: /var/run/dpdk/spdk_pid72035 00:33:26.726 Removing: /var/run/dpdk/spdk_pid73114 00:33:26.726 Removing: /var/run/dpdk/spdk_pid74406 00:33:26.726 Removing: /var/run/dpdk/spdk_pid75585 00:33:26.726 Removing: /var/run/dpdk/spdk_pid76767 00:33:26.726 Removing: /var/run/dpdk/spdk_pid77947 00:33:26.726 Removing: /var/run/dpdk/spdk_pid79129 00:33:26.726 Removing: /var/run/dpdk/spdk_pid82750 00:33:26.726 Removing: /var/run/dpdk/spdk_pid83131 00:33:26.726 Removing: /var/run/dpdk/spdk_pid84147 
00:33:26.726 Removing: /var/run/dpdk/spdk_pid84739 00:33:26.726 Removing: /var/run/dpdk/spdk_pid88195 00:33:26.726 Removing: /var/run/dpdk/spdk_pid90168 00:33:26.726 Removing: /var/run/dpdk/spdk_pid94091 00:33:26.726 Removing: /var/run/dpdk/spdk_pid97409 00:33:26.726 Removing: /var/run/dpdk/spdk_pid9948 00:33:26.726 Clean 00:33:26.726 06:59:31 -- common/autotest_common.sh@1437 -- # return 0 00:33:26.726 06:59:31 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:33:26.726 06:59:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:26.726 06:59:31 -- common/autotest_common.sh@10 -- # set +x 00:33:26.726 06:59:31 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:33:26.726 06:59:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:26.726 06:59:31 -- common/autotest_common.sh@10 -- # set +x 00:33:26.984 06:59:31 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:26.984 06:59:31 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:33:26.984 06:59:31 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:33:26.984 06:59:31 -- spdk/autotest.sh@389 -- # hash lcov 00:33:26.984 06:59:31 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:33:26.984 06:59:31 -- spdk/autotest.sh@391 -- # hostname 00:33:26.984 06:59:31 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:33:26.984 geninfo: WARNING: invalid characters removed from testname! 
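The lcov capture above and the merge/filter passes that follow implement the usual capture-combine-filter coverage flow: capture coverage data from the SPDK tree, merge it with the baseline taken before the tests, then strip DPDK, system headers and a few example apps from the combined tracefile. A condensed sketch of that flow, assuming the same lcov tooling; the --rc branch/function-coverage switches and --no-external flag used in the log are omitted for brevity, and the output directory is illustrative:

    out=../output
    lcov -q -c -d ./spdk -t "$(hostname)" -o "$out/cov_test.info"
    lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    lcov -q -r "$out/cov_total.info" '*/dpdk/*' '/usr/*' -o "$out/cov_total.info"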
00:33:59.087 06:59:59 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:59.087 07:00:02 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:01.621 07:00:05 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:04.145 07:00:08 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:07.422 07:00:11 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:09.966 07:00:14 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:13.247 07:00:17 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:13.247 07:00:17 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:13.247 07:00:17 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:13.247 07:00:17 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:13.247 07:00:17 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:13.247 07:00:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.247 07:00:17 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.247 07:00:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.247 07:00:17 -- paths/export.sh@5 -- $ export PATH 00:34:13.247 07:00:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.247 07:00:17 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:34:13.247 07:00:17 -- common/autobuild_common.sh@435 -- $ date +%s 00:34:13.247 07:00:17 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713330017.XXXXXX 00:34:13.247 07:00:17 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713330017.mVMrsg 00:34:13.247 07:00:17 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:34:13.247 07:00:17 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']' 00:34:13.247 07:00:17 -- common/autobuild_common.sh@442 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:34:13.247 07:00:17 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:34:13.247 07:00:17 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:34:13.247 07:00:17 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:34:13.247 07:00:17 -- common/autobuild_common.sh@451 -- $ get_config_params 00:34:13.247 07:00:17 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:34:13.247 07:00:17 -- common/autotest_common.sh@10 -- $ set +x 00:34:13.247 07:00:17 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:34:13.247 07:00:17 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:34:13.247 07:00:17 -- pm/common@17 -- $ local monitor 00:34:13.247 07:00:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:13.247 07:00:17 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=162365 00:34:13.247 07:00:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:13.247 
07:00:17 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=162367 00:34:13.247 07:00:17 -- pm/common@21 -- $ date +%s 00:34:13.247 07:00:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:13.247 07:00:17 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=162369 00:34:13.247 07:00:17 -- pm/common@21 -- $ date +%s 00:34:13.247 07:00:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:13.247 07:00:17 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=162373 00:34:13.247 07:00:17 -- pm/common@21 -- $ date +%s 00:34:13.247 07:00:17 -- pm/common@26 -- $ sleep 1 00:34:13.247 07:00:17 -- pm/common@21 -- $ date +%s 00:34:13.247 07:00:17 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713330017 00:34:13.247 07:00:17 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713330017 00:34:13.247 07:00:17 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713330017 00:34:13.247 07:00:17 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713330017 00:34:13.247 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713330017_collect-vmstat.pm.log 00:34:13.247 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713330017_collect-bmc-pm.bmc.pm.log 00:34:13.247 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713330017_collect-cpu-load.pm.log 00:34:13.247 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713330017_collect-cpu-temp.pm.log 00:34:13.828 07:00:18 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:34:13.828 07:00:18 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:34:13.828 07:00:18 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:13.828 07:00:18 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:34:13.828 07:00:18 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:34:13.828 07:00:18 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:34:13.828 07:00:18 -- spdk/autopackage.sh@19 -- $ timing_finish 00:34:13.828 07:00:18 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:13.828 07:00:18 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:34:13.828 07:00:18 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:13.828 07:00:18 -- spdk/autopackage.sh@20 -- $ exit 0 00:34:13.828 07:00:18 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:34:13.828 07:00:18 -- pm/common@30 -- $ signal_monitor_resources TERM 00:34:13.828 07:00:18 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:34:13.828 07:00:18 -- pm/common@43 -- $ for 
monitor in "${MONITOR_RESOURCES[@]}" 00:34:13.828 07:00:18 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:34:13.828 07:00:18 -- pm/common@45 -- $ pid=162382 00:34:13.828 07:00:18 -- pm/common@52 -- $ sudo kill -TERM 162382 00:34:13.828 07:00:18 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:13.828 07:00:18 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:34:13.828 07:00:18 -- pm/common@45 -- $ pid=162384 00:34:13.828 07:00:18 -- pm/common@52 -- $ sudo kill -TERM 162384 00:34:13.828 07:00:18 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:13.828 07:00:18 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:34:13.828 07:00:18 -- pm/common@45 -- $ pid=162381 00:34:13.828 07:00:18 -- pm/common@52 -- $ sudo kill -TERM 162381 00:34:13.828 07:00:18 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:13.828 07:00:18 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:34:13.828 07:00:18 -- pm/common@45 -- $ pid=162383 00:34:13.828 07:00:18 -- pm/common@52 -- $ sudo kill -TERM 162383 00:34:14.087 + [[ -n 3953039 ]] 00:34:14.087 + sudo kill 3953039 00:34:14.098 [Pipeline] } 00:34:14.118 [Pipeline] // stage 00:34:14.123 [Pipeline] } 00:34:14.144 [Pipeline] // timeout 00:34:14.149 [Pipeline] } 00:34:14.167 [Pipeline] // catchError 00:34:14.172 [Pipeline] } 00:34:14.190 [Pipeline] // wrap 00:34:14.196 [Pipeline] } 00:34:14.211 [Pipeline] // catchError 00:34:14.221 [Pipeline] stage 00:34:14.223 [Pipeline] { (Epilogue) 00:34:14.237 [Pipeline] catchError 00:34:14.239 [Pipeline] { 00:34:14.252 [Pipeline] echo 00:34:14.254 Cleanup processes 00:34:14.259 [Pipeline] sh 00:34:14.557 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:14.557 162510 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:34:14.557 162646 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:14.571 [Pipeline] sh 00:34:14.849 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:14.849 ++ grep -v 'sudo pgrep' 00:34:14.849 ++ awk '{print $1}' 00:34:14.849 + sudo kill -9 162510 00:34:14.860 [Pipeline] sh 00:34:15.140 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:23.270 [Pipeline] sh 00:34:23.547 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:23.547 Artifacts sizes are good 00:34:23.558 [Pipeline] archiveArtifacts 00:34:23.564 Archiving artifacts 00:34:23.757 [Pipeline] sh 00:34:24.036 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:34:24.050 [Pipeline] cleanWs 00:34:24.060 [WS-CLEANUP] Deleting project workspace... 00:34:24.060 [WS-CLEANUP] Deferred wipeout is used... 00:34:24.066 [WS-CLEANUP] done 00:34:24.068 [Pipeline] } 00:34:24.088 [Pipeline] // catchError 00:34:24.101 [Pipeline] sh 00:34:24.379 + logger -p user.info -t JENKINS-CI 00:34:24.387 [Pipeline] } 00:34:24.404 [Pipeline] // stage 00:34:24.410 [Pipeline] } 00:34:24.426 [Pipeline] // node 00:34:24.431 [Pipeline] End of Pipeline 00:34:24.533 Finished: SUCCESS